Image encoding and decoding devices, image encoding and decoding methods and image prediction device
Patent abstract:
IMAGE ENCODING AND DECODING DEVICES, IMAGE ENCODING AND DECODING METHODS AND IMAGE PREDICTION DEVICE. When an intra prediction unit (4) generates an intra prediction image by performing an intra-frame prediction process using an already encoded image signal in a frame, a filter selection table is referred to, such that a filter is selected from one or more filters prepared in advance according to the states of several parameters related to the coding of the target block to be filtered, and the selected filter is used to perform a filtering process on the prediction image. Prediction errors that occur locally can thereby be reduced, and image quality can be improved.
Publication number: BR112013016961A2
Application number: R112013016961-3
Application date: 2012-01-06
Publication date: 2020-06-30
Inventors: Akira Minezawa; Kazuo Sugimoto; Shunichi Sekiguchi
Applicant: Mitsubishi Electric Corporation
IPC main classification:
Patent description:
"IMAGE ENCODING AND DECODING DEVICES, IMAGE ENCODING AND DECODING METHODS: IMAGE AND IMAGE PREVISION DEVICE" l FIELD OF THE INVENTION Ss The present invention relates to a moving image encoding device and a method of moving image encoding to encode a moving image with a high degree of efficiency, and a moving image decoding device and a moving image decoding method of decoding a coded moving image with a high degree of efficiency. BACKGROUND OF THE INVENTION For example, according to an international standard video encoding method, such as MPEG (Moving Image Expert Group) or "ITU-T H.26x", an input video frame is divided into blocks rectangular (target encoding blocks), a prediction process using an already encoded image signal is performed on each target encoding block to generate a prediction image, and orthogonal transform, and a quantization process is performed on an error signal prediction that is the difference between the target coding block and the forecast image in units of a block, such that information compression is performed in the input video frame. For example, in the case of MPEG-4 AVC / H.264 (ISO / IEC 14496-10 / ITU-T H.264) which is an international standard method, an intrapredictive process for adjacent encoded pixels, or a process - prediction of compensated movement between adjacent frames is carried out (for example, refer to the number 1 non-patent reference). In the case of MPEG-4 AVC / H.264, a prediction mode can be selected from a plurality of prediction modes for each block in a luminance intraprevision mode. Fig. 10 is an explanatory drawing showing intra-forecast modes in the case of a block size of 4x4 pixels for - luminance. In Fig. 10, each white circle shows a pixel in a block of: coding, and each black circle shows a pixel that is used for prediction, and that exists in an adjacent block already coded. In the example shown in Fig. 10, nine modes O to 8 are prepared as intra-forecast modes, and mode 2 is one in which an average forecast is performed in such a way that each pixel in the target coding block is predicted using the average of adjacent pixels in the upper and left blocks. The modes other than mode 2 are intra-forecast modes in each of which a directional forecast is performed. Mode 0 is one in which a vertical forecast is performed in such a way that adjacent pixels in the upper block are repeatedly replicated to create several lines of pixels along a vertical direction to generate a forecast image. For example, mode 0 is selected when the target coding block is a vertically strip pattern. Mode 1 is one in which a horizontal forecast is performed in such a way that adjacent pixels in the left block are repeatedly replicated to create multiple columns of pixels along a horizontal direction to generate a forecast image. For example, mode 1 is selected when the target —coding block is a horizontally strip pattern. In each of modes 3 to 8, interpolation pixels being executed in a predetermined direction (i.e., a direction shown by the arrows) are generated using the adjacent pixels in the upper block or the left block to generate a preview image. In this case, the block size for luminance to which an intraprevision is applied can be selected from 4x4 pixels, 8x8 pixels, and 16x16 pixels. In the case of 8x8 pixels, nine forecast modes are defined, as in the case of 4x4 pixels. 
In contrast to this, in the case of 16x16 pixels, four prediction modes that are called plane prediction are defined in addition to the prediction modes associated with an average prediction, a vertical prediction, and a horizontal prediction. Each intra prediction associated with a plane prediction is a mode in which pixels created by interpolating diagonally between the adjacent pixels in the upper block and the adjacent pixels in the left block are provided as predicted values. In a directional intra prediction mode, because predicted values are generated along a direction predetermined by the mode, e.g., a 45-degree direction, the prediction efficiency increases and the code amount can be reduced when the direction of an edge of an object in the block coincides with the direction shown by the prediction mode. However, a slight shift can occur between the direction of an edge and the direction shown by the prediction mode, and, even if the direction of an edge in the target coding block matches the direction shown by the prediction mode, a large prediction error can occur locally for the simple reason that the edge is slightly distorted (wobbled, bent, or the like). As a result, the prediction efficiency can drop dramatically. In order to prevent such a reduction in prediction efficiency, when an 8x8-pixel directional prediction is performed, a prediction process is performed to generate a smoothed prediction image using adjacent encoded pixels on which a smoothing process has been carried out, thereby reducing the prediction errors caused by a slight shift in the prediction direction or a slight distortion at an edge.

Related technique document
Nonpatent reference
Nonpatent reference 1: MPEG-4 AVC standard (ISO/IEC 14496-10)/ITU-T H.264

SUMMARY OF THE INVENTION
PROBLEMS TO BE SOLVED BY THE INVENTION
Because the conventional image encoding device is constructed as above, the generation of a smoothed prediction image can reduce the prediction errors that occur even if a slight shift occurs in the prediction direction or a slight distortion occurs at an edge. However, according to the technique disclosed in nonpatent reference 1, no smoothing process is performed on blocks other than 8x8-pixel blocks, and only one smoothing process is available even for 8x8-pixel blocks. One problem is that, also in a block having a size other than 8x8 pixels, a large prediction error actually occurs locally due to a slight mismatch at an edge even when the prediction image has a pattern similar to that of the image to be encoded, and therefore a large reduction in prediction efficiency occurs. Another problem is that, when the quantization parameter used for quantizing the prediction error signal, the position of each pixel in the block, the prediction mode, or the like differs between blocks having the same size, the process suited to reducing the local prediction errors also differs between the blocks; but because only one smoothing process is prepared, the prediction errors cannot be sufficiently reduced.
An additional problem is that, when an average prediction is performed, the prediction signal for a pixel located at the edge of the block easily becomes discontinuous with respect to the adjacent encoded pixels, because the average of the pixels adjacent to the block is used as the predicted value of every pixel in the block; since the image signal generally has a high spatial correlation, a prediction error therefore easily occurs at the margin of the block due to this discontinuity.

The present invention is made in order to solve the problems mentioned above, and it is therefore an object of the present invention to provide a moving image encoding device, a moving image decoding device, a moving image encoding method, and a moving image decoding method capable of reducing prediction errors that occur locally, thereby improving image quality.

MEANS TO SOLVE THE PROBLEM
In accordance with the present invention, there is provided a moving image encoding device in which, when an intra-frame prediction process is performed to generate a prediction image using an already encoded image signal in a frame, an intra prediction unit selects a filter from one or more filters prepared in advance according to the states of various parameters associated with the coding of a target block to be filtered, performs a filtering process on the prediction image using the filter, and outputs the prediction image on which the intra prediction unit has performed the filtering process to a difference image generating unit.

ADVANTAGES OF THE INVENTION
Because the moving image encoding device according to the present invention is constructed in such a way that, when performing an intra-frame prediction process to generate a prediction image using an already encoded image signal in a frame, the intra prediction unit selects a filter from one or more filters prepared in advance according to the states of various parameters associated with the coding of a target block to be filtered, performs a filtering process on the prediction image using the filter, and outputs the prediction image on which the intra prediction unit has performed the filtering process to the difference image generating unit, an advantage is provided of being able to reduce prediction errors that occur locally and thereby improve image quality.

BRIEF DESCRIPTION OF THE FIGURES
[Fig. 1] Fig. 1 is a block diagram showing a moving image encoding device according to Embodiment 1 of the present invention;
[Fig. 2] Fig. 2 is a block diagram showing a moving image decoding device according to Embodiment 1 of the present invention;
[Fig. 3] Fig. 3 is a flow chart showing the processing performed by the moving image encoding device according to Embodiment 1 of the present invention;
[Fig. 4] Fig. 4 is a flow chart showing the processing performed by the moving image decoding device according to Embodiment 1 of the present invention;
[Fig. 5] Fig. 5 is an explanatory drawing showing a state in which each coding block having a maximum size is hierarchically divided into a plurality of coding blocks;
[Fig. 6] Fig. 6(a) is an explanatory drawing showing a distribution of partitions into which a coding block is divided, and Fig. 6(b) is an explanatory drawing showing a state in which a coding mode m(B^n) is assigned to each of the partitions after a hierarchical layer division is performed, using a quadtree graph;
[Fig. 7] Fig. 7 is an explanatory drawing showing an example of intra prediction parameters (intra prediction modes) that can be selected for each partition P_i^n in a coding block B^n;
[Fig. 8] Fig. 8 is an explanatory drawing showing an example of pixels that are used when a predicted value for each pixel in a partition P_i^n is generated, in the case of l_i^n = m_i^n = 4;
[Fig. 9] Fig. 9 is an explanatory drawing showing an example of the arrangement of reference pixels in the case of N = 5;
[Fig. 10] Fig. 10 is an explanatory drawing showing the intra prediction modes described in nonpatent reference 1 in the case of a 4x4-pixel block size for luminance;
[Fig. 11] Fig. 11 is an explanatory drawing showing an example of the distances between the already encoded pixels in the frame that are used when a prediction image is generated and each target pixel to be filtered;
[Fig. 12] Fig. 12 is an explanatory drawing showing a concrete arrangement of reference pixels to be referenced by a filter;
[Fig. 13] Fig. 13 is an explanatory drawing showing an example of a table for determining which filter should be used for each combination of an intra prediction mode index and a partition size;
[Fig. 14] Fig. 14 is an explanatory drawing showing an example of simplification of the filtering process when an average prediction is performed;
[Fig. 15] Fig. 15 is an explanatory drawing showing an example of a bit stream in which a filter selection table index is added to a sequence level header;
[Fig. 16] Fig. 16 is an explanatory drawing showing an example of a bit stream in which a filter selection table index is added to a picture level header;
[Fig. 17] Fig. 17 is an explanatory drawing showing an example of a bit stream in which a filter selection table index is added to a slice header;
[Fig. 18] Fig. 18 is an explanatory drawing showing an example of a bit stream in which a filter selection table index is added to a reference block header;
[Fig. 19] Fig. 19 is an explanatory drawing showing another example of the table, which differs from that shown in Fig. 13, for determining which filter should be used for each combination of an intra prediction mode index and a partition size; and
[Fig. 20] Fig. 20 is an explanatory drawing showing an example of a table for determining whether or not to perform a smoothing process on reference pixels at the time of generating an intermediate prediction image, for each combination of an intra prediction mode index and a partition size.

EMBODIMENTS OF THE INVENTION
Hereinafter, in order to explain this invention in more detail, the preferred embodiments of the present invention will be described with reference to the accompanying drawings.

Embodiment 1.
This embodiment describes a moving image encoding device that receives each frame image of a video, performs an intra prediction process from adjacent encoded pixels or a motion-compensated prediction process between adjacent frames to generate a prediction image, performs a compression process based on an orthogonal transform and quantization on a prediction error signal that is a difference image between the prediction image and the frame image, and then performs variable length coding to generate a bit stream, as well as a moving image decoding device that decodes the bit stream output from the moving image encoding device. The moving image encoding device according to this Embodiment 1 is characterized in that it adapts to local changes of the video signal in the spatial and temporal directions to divide the video signal into regions of various sizes, and performs adaptive intra-frame and inter-frame encoding.

In general, a video signal has a characteristic that its complexity varies locally in space and time. From the spatial point of view, a pattern having a uniform signal characteristic over a relatively large image region, such as a sky image or a wall image, may coexist in a given video frame with a pattern having a complicated texture in a small image region, such as an image of a person or a picture including a fine texture. Also from the temporal point of view, a relatively large image region, such as a sky image or a wall image, has a small local change of its pattern in the time direction, whereas an image of a moving person or object has a larger temporal change because its outline undergoes rigid and non-rigid motion with respect to time. Although the coding process generates a prediction error signal having small signal power and small entropy by means of temporal and spatial prediction, thereby reducing the total code amount, the number of parameters used for the prediction can only be reduced if the parameters can be applied uniformly to as large an image signal region as possible. On the other hand, because the amount of prediction error increases when the same prediction parameters are applied to an image signal pattern having a large change in time and space, the code amount of the prediction error signal cannot be reduced. It is therefore desirable to reduce the size of the region subjected to the prediction process when the prediction process is performed on an image signal pattern having a large change in time and space, thereby reducing the power and entropy of the prediction error signal even though the data volume of the parameters used for the prediction is increased. In order to perform encoding adapted to such typical characteristics of a video signal, the moving image encoding device according to this embodiment hierarchically divides each region of the video signal having a predetermined maximum block size into blocks, and performs the prediction process and the process of encoding the prediction error on each of the blocks into which each region is divided.

A video signal to be processed by the moving image encoding device according to this Embodiment 1 can be an arbitrary video signal in which each video frame consists of a series of digital samples (pixels) in two dimensions, horizontal and vertical, such as a YUV signal consisting of a luminance signal and two color difference signals, a color video image signal in an arbitrary color space, such as an RGB signal output from a digital image sensor, a monochrome image signal, or an infrared image signal. The gradation of each pixel can be 8 bits, 10 bits, or 12 bits. In the following explanation, the input video signal is a YUV signal unless otherwise specified, and the two color difference components U and V are assumed to be signals having a 4:2:0 format that are subsampled with respect to the luminance component Y. The data unit to be processed that corresponds to each frame of the video signal is referred to as a "picture". In this Embodiment 1, a "picture" is explained as a video signal frame on which progressive scanning has been performed.
When the video signal is an interlaced signal, a "picture" can alternatively be a field image signal, which is a unit that forms a video frame.

Fig. 1 is a block diagram showing a moving image encoding device according to Embodiment 1 of the present invention. Referring to Fig. 1, a coding control part 1 performs a process of determining a maximum size of each of the coding blocks, which is the unit to be processed at a time when an intra prediction process (intra-frame prediction process) or a motion-compensated prediction process (inter-frame prediction process) is performed, and also determining an upper limit on the number of hierarchical layers, i.e., a maximum hierarchy depth, of the hierarchy in which each coding block having the maximum size is hierarchically divided into blocks. The coding control part 1 also performs a process of selecting a coding mode suitable for each of the coding blocks into which each coding block having the maximum size is hierarchically divided, from one or more available coding modes (one or more intra coding modes and one or more inter coding modes). The coding control part 1 further performs a process of determining a quantization parameter and a transform block size that are used when a difference image is compressed for each coding block, and also determining intra prediction parameters or inter prediction parameters that are used when a prediction process is performed for each coding block. The quantization parameter and the transform block size are included in the prediction error coding parameters, and these prediction error coding parameters are output to a transform/quantization part 7, to an inverse quantization/inverse transform part 8, to a variable length coding part 13, and so on. The coding control part 1 constructs a coding control unit.

A block dividing part 2 performs a process of, when receiving a video signal showing an input image, dividing the input image shown by the video signal into coding blocks each having the maximum size determined by the coding control part 1, and also dividing each coding block hierarchically into blocks until the number of hierarchical layers reaches the upper limit on the number of hierarchical layers determined by the coding control part 1. The block dividing part 2 constructs a block dividing unit.

A selection switch 3 performs a process of, when the coding mode selected by the coding control part 1 for a coding block generated by the division by the block dividing part 2 is an intra coding mode, outputting the coding block to an intra prediction part 4, and, when the coding mode selected by the coding control part 1 for the coding block is an inter coding mode, outputting the coding block to a motion-compensated prediction part 5.

The intra prediction part 4 performs a process of, when receiving a coding block generated by the division by the block dividing part 2 from the selection switch 3, carrying out an intra prediction process on the coding block to generate a prediction image for each partition using an already encoded image signal in the frame, based on the intra prediction parameter output thereto from the coding control part 1.
After generating the prediction image mentioned above, the intra prediction part 4 selects a filter from one or more filters prepared in advance according to the states of the various parameters that must be known when the moving image decoding device generates the same prediction image, performs a filtering process on the prediction image using the filter, and outputs the prediction image on which the filtering process has been performed to a subtraction part 6 and to an addition part 9. Concretely, the intra prediction part uniquely determines the filter according to the state of at least one of the following four parameters, which are provided as the various parameters mentioned above:
* Parameter (1): the block size of the prediction image mentioned above
* Parameter (2): the quantization parameter determined by the coding control part 1
* Parameter (3): the distance between the already encoded image signal in the frame that is used when the prediction image is generated and the target pixel to be filtered
* Parameter (4): the intra prediction parameter determined by the coding control part 1
An intra prediction unit is comprised of the selection switch 3 and the intra prediction part 4.

The motion-compensated prediction part 5 performs a process of, when an inter coding mode is selected by the coding control part 1 as the coding mode suitable for a coding block generated by the division by the block dividing part 2, performing a motion-compensated prediction process on the coding block to generate a prediction image using one or more frames of reference images stored in a motion-compensated prediction frame memory 12, based on the inter prediction parameters output thereto from the coding control part 1. A motion-compensated prediction unit is comprised of the selection switch 3 and the motion-compensated prediction part 5.

The subtraction part 6 performs a process of subtracting the prediction image generated by the intra prediction part 4 or by the motion-compensated prediction part 5 from the coding block generated by the division by the block dividing part 2, to generate a difference image (= the coding block - the prediction image). The subtraction part 6 constructs a difference image generating unit.

The transform/quantization part 7 performs a process of performing a transform process (e.g., a DCT (discrete cosine transform) or an orthogonal transform process, such as a KL transform, in which bases are designed for a specific learning sequence in advance) on the difference signal generated by the subtraction part 6, in units of a block having the transform block size included in the prediction error coding parameters output thereto from the coding control part 1, and also quantizing the transform coefficients of the difference image using the quantization parameter included in the prediction error coding parameters, to output the transform coefficients quantized in this way as compressed data of the difference image. The transform/quantization part 7 constructs an image compression unit.
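As a rough, hypothetical illustration of the residual path just described (subtraction part 6 followed by the transform/quantization part 7 and the local decoding loop), the sketch below forms a difference block, applies a 2-D DCT, and quantizes the coefficients with a uniform step. The 8x8 block size, the use of SciPy's DCT, and the mapping of the quantization parameter to a single step size are assumptions for illustration, not the patent's definitions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_residual(block, prediction, qstep):
    """Sketch of: difference image -> orthogonal transform -> quantization."""
    residual = block.astype(np.int32) - prediction.astype(np.int32)  # subtraction part 6
    coeffs = dctn(residual, norm="ortho")                            # transform (DCT assumed)
    return np.round(coeffs / qstep).astype(np.int32)                 # quantization

def decode_residual(qcoeffs, qstep):
    """Sketch of the inverse quantization / inverse transform path."""
    return idctn(qcoeffs * qstep, norm="ortho")                      # local decoded prediction error

block = np.random.randint(0, 256, (8, 8))
pred = np.full((8, 8), int(block.mean()))
q = encode_residual(block, pred, qstep=16)
recon = pred + decode_residual(q, qstep=16)   # addition part 9: local decoded image
```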
The inverse quantization/inverse transform part 8 performs a process of inverse-quantizing the compressed data output thereto from the transform/quantization part 7 using the quantization parameter included in the prediction error coding parameters output thereto from the coding control part 1, and performing an inverse transform process (e.g., an inverse DCT (inverse discrete cosine transform) or an inverse transform process such as an inverse KL transform) on the compressed data thus inverse-quantized, in units of a block having the transform block size included in the prediction error coding parameters, to output the result of the inverse transform process as a local decoded prediction error signal.

The addition part 9 performs a process of adding the local decoded prediction error signal output thereto from the inverse quantization/inverse transform part 8 and the prediction signal showing the prediction image generated by the intra prediction part 4 or by the motion-compensated prediction part 5, to generate a local decoded image signal showing a local decoded image.

An intra prediction memory 10 is a recording medium, such as a RAM, for storing the local decoded image shown by the local decoded image signal generated by the addition part 9 as an image that the intra prediction part 4 will use when performing the next intra prediction process.

A loop filter part 11 performs a process of compensating for a coding distortion included in the local decoded image signal generated by the addition part 9, and outputting the local decoded image shown by the local decoded image signal on which the coding distortion compensation has been performed to a motion-compensated prediction frame memory 12 as a reference image. The motion-compensated prediction frame memory 12 is a recording medium, such as a RAM, for storing the local decoded image on which the loop filter part 11 has performed the filtering process, as a reference image that the motion-compensated prediction part 5 will use when performing the next motion prediction process.

The variable length coding part 13 performs a process of variable-length-coding the compressed data output thereto from the transform/quantization part 7, the coding mode and the prediction error coding parameters output thereto from the coding control part 1, and the intra prediction parameters output thereto from the intra prediction part 4 or the inter prediction parameters output thereto from the motion-compensated prediction part 5, to generate a bit stream into which the encoded data of the compressed data, the encoded data of the coding mode, the encoded data of the prediction error coding parameters, and the encoded data of the intra prediction parameters or inter prediction parameters are multiplexed. The variable length coding part 13 constructs a variable length coding unit.

Fig. 2 is a block diagram showing the moving image decoding device according to Embodiment 1 of the present invention.
Referring to Fig. 2, a variable length decoding part 51 performs a process of variable-length-decoding the encoded data multiplexed into the bit stream to acquire the compressed data, the coding mode, the prediction error coding parameters, and the intra prediction parameters or inter prediction parameters, which are associated with each coding block into which each frame of the video is hierarchically divided, outputting the compressed data and the prediction error coding parameters to an inverse quantization/inverse transform part 55, and also outputting the coding mode and the intra prediction parameters or inter prediction parameters to a selection switch 52. The variable length decoding part 51 constructs a variable length decoding unit.

The selection switch 52 performs a process of, when the coding mode associated with the coding block, which is output from the variable length decoding part 51, is an intra coding mode, outputting the intra prediction parameters output thereto from the variable length decoding part 51 to an intra prediction part 53, and, when the coding mode is an inter coding mode, outputting the inter prediction parameters output thereto from the variable length decoding part 51 to a motion-compensated prediction part 54.

The intra prediction part 53 performs a process of performing an intra-frame prediction process on the coding block to generate a prediction image for each partition using an already decoded image signal in the frame, based on the intra prediction parameter output thereto from the selection switch 52. After generating the prediction image mentioned above, the intra prediction part 53 selects a filter from one or more filters prepared in advance according to the states of the various parameters that are known at the time the prediction image is generated, performs a filtering process on the prediction image using the filter, and outputs the prediction image on which the filtering process has been performed to an addition part 56. Concretely, the intra prediction part uniquely determines the filter according to the state of at least one of the following four parameters, which are provided as the various parameters mentioned above. The intra prediction part predetermines, as the one or more parameters to be used, the same parameters as those used by the moving image encoding device. More specifically, the parameters that the moving image encoding device uses and those that the moving image decoding device uses are made equal to each other, such that, for example, when the intra prediction part 4 performs the filtering process using parameters (1) and (4) in the moving image encoding device, the intra prediction part 53 similarly performs the filtering process using parameters (1) and (4) in the moving image decoding device.
* Parameter (1): the block size of the prediction image mentioned above
* Parameter (2): the quantization parameter variable-length-decoded by the variable length decoding part 51
* Parameter (3): the distance between the already decoded image signal in the frame that is used when the prediction image is generated and the target pixel to be filtered
* Parameter (4): the intra prediction parameter variable-length-decoded by the variable length decoding part 51
An intra prediction unit is comprised of the selection switch 52 and the intra prediction part 53.
The motion-compensated prediction part 54 performs a process of performing a motion-compensated prediction process on the coding block to generate a prediction image using one or more frames of reference images stored in a motion-compensated prediction frame memory 59, based on the inter prediction parameters output thereto from the selection switch 52. A motion-compensated prediction unit is comprised of the selection switch 52 and the motion-compensated prediction part 54.

The inverse quantization/inverse transform part 55 performs a process of inverse-quantizing the compressed data associated with the coding block, which is output thereto from the variable length decoding part 51, using the quantization parameter included in the prediction error coding parameters output thereto from the variable length decoding part 51, and performing an inverse transform process (e.g., an inverse DCT (inverse discrete cosine transform) or an inverse transform process such as an inverse KL transform) on the compressed data thus inverse-quantized, in units of a block having the transform block size included in the prediction error coding parameters, to output the result of the inverse transform process as a decoded prediction error signal (a signal showing the difference image before compression). The inverse quantization/inverse transform part 55 constructs a difference image generating unit.

The addition part 56 performs a process of adding the decoded prediction error signal output thereto from the inverse quantization/inverse transform part 55 and the prediction signal showing the prediction image generated by the intra prediction part 53 or by the motion-compensated prediction part 54, to generate a decoded image signal showing a decoded image. The addition part 56 constructs a decoded image generating unit.

An intra prediction memory 57 is a recording medium, such as a RAM, for storing the decoded image shown by the decoded image signal generated by the addition part 56 as an image that the intra prediction part 53 will use when performing the next intra prediction process.

A loop filter part 58 performs a process of compensating for a coding distortion included in the decoded image signal generated by the addition part 56, and outputting the decoded image shown by the decoded image signal on which the coding distortion compensation has been performed to a motion-compensated prediction frame memory 59 as a reference image. The motion-compensated prediction frame memory 59 is a recording medium, such as a RAM, for storing the decoded image on which the loop filter part 58 has performed the filtering process, as a reference image that the motion-compensated prediction part 54 will use when performing the next motion-compensated prediction process.

In the example shown in Fig. 1, the coding control part 1, the block dividing part 2, the selection switch 3, the intra prediction part 4, the motion-compensated prediction part 5, the subtraction part 6, the transform/quantization part 7, the inverse quantization/inverse transform part 8, the addition part 9, the loop filter part 11, and the variable length coding part 13, which are the components of the moving image encoding device, can consist of pieces of dedicated hardware (e.g., integrated circuits on each of which a CPU is mounted, one-chip microcomputers, or the like), respectively.
Alternatively, the moving image encoding device may consist of a computer, and a program in which the processes performed by the coding control part 1, the block dividing part 2, the selection switch 3, the intra prediction part 4, the motion-compensated prediction part 5, the subtraction part 6, the transform/quantization part 7, the inverse quantization/inverse transform part 8, the addition part 9, the loop filter part 11, and the variable length coding part 13 are described can be stored in a memory of the computer, and the CPU of the computer can be made to execute the program stored in the memory. Fig. 3 is a flow chart showing the processing performed by the moving image encoding device according to Embodiment 1 of the present invention.

In the example shown in Fig. 2, the variable length decoding part 51, the selection switch 52, the intra prediction part 53, the motion-compensated prediction part 54, the inverse quantization/inverse transform part 55, the addition part 56, and the loop filter part 58, which are the components of the moving image decoding device, can consist of pieces of dedicated hardware (e.g., integrated circuits on each of which a CPU is mounted, one-chip microcomputers, or the like), respectively. Alternatively, the moving image decoding device may consist of a computer, and a program in which the processes performed by the variable length decoding part 51, the selection switch 52, the intra prediction part 53, the motion-compensated prediction part 54, the inverse quantization/inverse transform part 55, the addition part 56, and the loop filter part 58 are described can be stored in a memory of the computer, and the CPU of the computer can be made to execute the program stored in the memory. Fig. 4 is a flow chart showing the processing performed by the moving image decoding device according to Embodiment 1 of the present invention.

In the following, the operation of the moving image encoding device and that of the moving image decoding device will be explained. First, the processing performed by the moving image encoding device shown in Fig. 1 will be explained. First, the coding control part 1 determines a maximum size of each of the coding blocks, which is the unit to be processed at a time when an intra prediction process (intra-frame prediction process) or a motion-compensated prediction process (inter-frame prediction process) is performed, and also determines an upper limit on the number of hierarchical layers of the hierarchy in which each coding block having the maximum size is hierarchically divided into blocks (step ST1 of Fig. 3). As a method of determining the maximum size of each of the coding blocks, for example, a method of determining a maximum size for all pictures according to the resolution of the input image can be considered. In addition, a method can be considered of quantifying a variation in the complexity of the local motion of the input image as a parameter and then determining a small maximum size for a picture having large and vigorous motion while determining a large maximum size for a picture having little motion.
As a method of determining the upper limit on the number of hierarchical layers, for example, a method can be considered of increasing the depth of the hierarchy, i.e., the number of hierarchical layers, so that finer motion can be detected when the input image has larger and more vigorous motion, and of decreasing the depth of the hierarchy, i.e., the number of hierarchical layers, when the input image has less motion. The coding control part 1 also selects a coding mode suitable for each of the coding blocks into which each coding block having the maximum size is hierarchically divided, from one or more available coding modes (M intra coding modes and N inter coding modes) (step ST2). Although a detailed explanation of the method of selecting a coding mode for use in the coding control part 1 is omitted because the selection method is a known technique, there is, for example, a method of performing a coding process on the coding block using each of the available coding modes to examine the coding efficiency and selecting the coding mode having the highest coding efficiency among the plurality of available coding modes. The coding control part 1 further determines a quantization parameter and a transform block size that are used when the difference image is compressed for each coding block, and also determines the intra prediction parameters or inter prediction parameters that are used when a prediction process is performed. The coding control part 1 outputs the prediction error coding parameters, including the quantization parameter and the transform block size, to the transform/quantization part 7, to the inverse quantization/inverse transform part 8, and to the variable length coding part 13. The coding control part also outputs the prediction error coding parameters to the intra prediction part 4 as needed.

When it receives the video signal showing the input image, the block dividing part 2 divides the input image shown by the video signal into coding blocks each having the maximum size determined by the coding control part 1, and also divides each of the coding blocks hierarchically into blocks until the number of hierarchical layers reaches the upper limit on the number of hierarchical layers determined by the coding control part 1. Fig. 5 is an explanatory drawing showing a state in which each coding block having the maximum size is hierarchically divided into a plurality of coding blocks. In the example of Fig. 5, each coding block having the maximum size is a coding block B^0 in the 0th hierarchical layer, and its luminance component has a size of (L^0, M^0). Furthermore, in the example of Fig. 5, by performing the hierarchical division with this coding block B^0 having the maximum size as a starting point until the depth of the hierarchy reaches a predetermined depth that is set separately according to a quadtree structure, coding blocks B^n can be acquired. At the depth n, each coding block B^n is an image region having a size of (L^n, M^n). Although L^n can be the same as or differ from M^n, the case of L^n = M^n is shown in the example of Fig. 5. Hereinafter, the size of each coding block B^n is defined as the size (L^n, M^n) of the luminance component of the coding block B^n. Because the block dividing part 2 performs a quadtree division, (L^(n+1), M^(n+1)) = (L^n/2, M^n/2) always holds.
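The quadtree division described above can be pictured with the following sketch (an illustration only; the recursive helper and the split criterion `want_split` are hypothetical stand-ins for the decision actually made by the coding control part 1):

```python
def divide_block(x, y, size, depth, max_depth, want_split):
    """Sketch of the quadtree division performed by the block dividing part 2.

    Each block B^n of size (L^n, M^n) is either kept as-is or split into four
    blocks of size (L^n/2, M^n/2), down to the maximum hierarchy depth.
    """
    if depth == max_depth or not want_split(x, y, size, depth):
        return [(x, y, size, depth)]            # leaf: one coding block B^n
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += divide_block(x + dx, y + dy, half, depth + 1, max_depth, want_split)
    return blocks

# Example: split everything down to depth 2, starting from a 64x64 maximum-size block
leaves = divide_block(0, 0, 64, 0, 2, lambda *_: True)
print(len(leaves))   # 16 blocks of size 16x16
```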
In the case of a color video image (4:4:4 format) in which all color components have the same number of samples, such as an RGB signal, all color components have a size of (L^n, M^n), whereas in the case of a 4:2:0 format, the corresponding color difference component has a coding block size of (L^n/2, M^n/2). Hereafter, a coding mode that can be selected for each coding block B^n in the nth hierarchical layer is expressed as m(B^n). In the case of a color video signal consisting of a plurality of color components, the coding mode m(B^n) can be formed in such a way that an individual mode is used for each color component. Hereinafter, unless otherwise specified, the explanation is made assuming that the coding mode m(B^n) indicates the one for the luminance component of each coding block having a 4:2:0 format in a YUV signal. The coding mode m(B^n) can be one of one or more intra coding modes (generically referred to as "INTRA") or one or more inter coding modes (generically referred to as "INTER"), and the coding control part 1 selects, as the coding mode m(B^n), the coding mode with the highest degree of coding efficiency for each coding block B^n from among all the coding modes available in the picture currently being processed or a subset of these coding modes, as mentioned above.

Each coding block B^n is further divided into one or more prediction units (partitions) by the block dividing part, as shown in Fig. 5. Hereafter, each partition belonging to a coding block B^n is expressed as P_i^n (i is a partition number in the nth hierarchical layer). How the division of each coding block B^n into the partitions P_i^n belonging to the coding block B^n is carried out is included as information in the coding mode m(B^n). Although the prediction process is performed on every partition P_i^n according to the coding mode m(B^n), an individual prediction parameter can be selected for each partition P_i^n. The coding control part 1 produces, for example, such a block division state as shown in Fig. 6 for a coding block having the maximum size, and then determines the coding blocks B^n. Hatched portions in Fig. 6(a) show the distribution of the partitions into which the coding block having the maximum size is divided, and Fig. 6(b) shows, using a quadtree graph, a situation in which coding modes m(B^n) are respectively assigned to the partitions generated by the hierarchical layer division. Each node surrounded by a square symbol in Fig. 6(b) is a node (coding block B^n) to which a coding mode m(B^n) is assigned.

When the coding control part 1 selects an optimal coding mode m(B^n) for each partition P_i^n of each coding block B^n and the coding mode m(B^n) is an intra coding mode (step ST3), the selection switch 3 outputs the partition P_i^n of the coding block B^n, which is generated by the division by the block dividing part 2, to the intra prediction part 4. In contrast, when the coding mode m(B^n) is an inter coding mode (step ST3), the selection switch outputs the partition P_i^n of the coding block B^n, which is generated by the division by the block dividing part 2, to the motion-compensated prediction part 5.
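The exhaustive mode decision mentioned above (trying each available coding mode and keeping the most efficient one) can be sketched as follows; the cost function `encode_with_mode` is a hypothetical placeholder for whatever coding-efficiency measure is used:

```python
def select_coding_mode(block, available_modes, encode_with_mode):
    """Sketch of the exhaustive mode decision: encode the block with every
    available mode and keep the most efficient one.

    `encode_with_mode(block, mode)` is a hypothetical helper returning a cost
    (e.g., bits spent plus weighted distortion); lower means more efficient.
    """
    best_mode, best_cost = None, float("inf")
    for mode in available_modes:          # M intra coding modes + N inter coding modes
        cost = encode_with_mode(block, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode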
When it receives the partition P_i^n of the coding block B^n from the selection switch 3, the intra prediction part 4 performs an intra prediction process on the partition P_i^n to generate an intra prediction image P_i^n using an already encoded image signal in the frame, based on the intra prediction parameter output thereto from the coding control part 1 (step ST4). After generating the intra prediction image P_i^n mentioned above, the intra prediction part 4 selects a filter from one or more filters prepared in advance according to the states of the various parameters that must be known when the moving image decoding device generates the same image as the intra prediction image P_i^n, and performs a filtering process on the intra prediction image P_i^n using the filter. After performing the filtering process on the intra prediction image P_i^n, the intra prediction part 4 outputs the intra prediction image P_i^n on which the filtering process has been performed to the subtraction part 6 and to the addition part 9. In order to allow the moving image decoding device shown in Fig. 2 to generate the same intra prediction image P_i^n, the intra prediction part outputs the intra prediction parameters to the variable length coding part 13. The outline of the process performed by the intra prediction part 4 is as mentioned above, and the details of this process will be described below.

When it receives the partition P_i^n of the coding block B^n from the selection switch 3, the motion-compensated prediction part 5 performs a motion-compensated prediction process on the partition P_i^n of the coding block B^n to generate an inter prediction image P_i^n using one or more frames of reference images stored in the motion-compensated prediction frame memory 12, based on the inter prediction parameters output thereto from the coding control part 1 (step ST5). Because the technology of performing a motion-compensated prediction process to generate a prediction image is known, a detailed explanation of this technology is omitted here.

After the intra prediction part 4 or the motion-compensated prediction part 5 generates the prediction image (an intra prediction image P_i^n or an inter prediction image P_i^n), the subtraction part 6 subtracts the prediction image (the intra prediction image P_i^n or the inter prediction image P_i^n) generated by the intra prediction part 4 or the motion-compensated prediction part 5 from the partition P_i^n of the coding block B^n, which is generated by the division by the block dividing part 2, to generate a difference image, and outputs a prediction error signal e_i^n showing the difference image to the transform/quantization part 7 (step ST6).

When it receives the prediction error signal e_i^n showing the difference image from the subtraction part 6, the transform/quantization part 7 performs a transform process (e.g., a DCT (discrete cosine transform) or an orthogonal transform process, such as a KL transform, in which bases are designed for a specific learning sequence in advance) on the difference image, in units of a block having the transform block size included in the prediction error coding parameters output thereto from the coding control part 1, quantizes the transform coefficients of the difference image using the quantization parameter included in the prediction error coding parameters, and outputs the transform coefficients thus quantized to the inverse quantization/inverse transform part 8 and to the variable length coding part 13 as compressed data of the difference image (step ST7).

When it receives the compressed data of the difference image from the transform/quantization part 7, the inverse quantization/inverse transform part 8 inverse-quantizes the compressed data of the difference image using the quantization parameter included in the prediction error coding parameters output thereto from the coding control part 1, performs an inverse transform process (e.g., an inverse DCT (inverse discrete cosine transform) or an inverse transform process such as an inverse KL transform) on the compressed data thus inverse-quantized, in units of a block having the transform block size included in the prediction error coding parameters, and outputs the result to the addition part 9 as a local decoded prediction error signal e_i^n hat ("^" attached to a letter is written as "hat" because of restrictions in electronic filing) (step ST8).

When it receives the local decoded prediction error signal e_i^n hat from the inverse quantization/inverse transform part 8, the addition part 9 adds the local decoded prediction error signal e_i^n hat and the prediction signal showing the prediction image (the intra prediction image P_i^n or the inter prediction image P_i^n) generated by the intra prediction part 4 or by the motion-compensated prediction part 5, to generate a local decoded image that is a local decoded partition image P_i^n hat or a local decoded coding block image that is a group of local decoded partition images (step ST9). After generating the local decoded image, the addition part 9 stores a local decoded image signal showing the local decoded image in the memory 10 for intra prediction, and also outputs the local decoded image signal to the loop filter part 11.

The moving image encoding device repeatedly performs the processes of steps ST3 to ST9 until the processing is completed on all the coding blocks B^n into which the input image is hierarchically divided, and, when the processing is completed on all the coding blocks B^n, moves to the process of step ST12 (steps ST10 and ST11).

The variable length coding part 13 entropy-codes the compressed data output thereto from the transform/quantization part 7, the coding mode (including the information showing the state of the division into coding blocks) and the prediction error coding parameters, which are output thereto from the coding control part 1, and the intra prediction parameters output thereto from the intra prediction part 4 or the inter prediction parameters output thereto from the motion-compensated prediction part 5. The variable length coding part 13 multiplexes the encoded data that are the results of the entropy coding of the compressed data, the coding mode, the prediction error coding parameters, and the intra prediction parameters or inter prediction parameters, to generate a bit stream (step ST12).

When it receives the local decoded image signal from the addition part 9, the loop filter part 11 compensates for a coding distortion included in the local decoded image signal, and stores the local decoded image shown by the local decoded image signal on which the coding distortion compensation has been performed in the motion-compensated prediction frame memory 12 as a reference image (step ST13). The loop filter part 11 can carry out the filtering process for each coding block having the maximum size of the local decoded image signal output thereto from the addition part 9, or for each coding block of the local decoded image signal, or for each unit that is a combination of a plurality of coding blocks each having the maximum size. Alternatively, after one picture of local decoded image signals is output, the loop filter part can perform the filtering process on the whole picture of local decoded image signals at a time.

In the following, the process performed by the intra prediction part 4 will be explained in detail. Fig. 7 is an explanatory drawing showing an example of the intra prediction parameters (intra prediction modes) that can be selected for each partition P_i^n in the coding block B^n. In the example shown in Fig. 7, the prediction modes and the prediction direction vectors represented by each of the prediction modes are shown; the relative angle between prediction direction vectors becomes smaller as the number of selectable prediction modes increases. The intra prediction part 4 performs an intra prediction process on a partition P_i^n based on the intra prediction parameters for the partition P_i^n and a selection parameter for a filter that the intra prediction part uses to generate an intra prediction image P_i^n. In the following, the intra process of generating an intra prediction signal of the luminance signal of the partition P_i^n based on the intra prediction parameter (intra prediction mode) for the luminance signal will be explained. The partition P_i^n is assumed to have a size of l_i^n x m_i^n pixels.

Fig. 8 is an explanatory drawing showing an example of the pixels that are used when a predicted value for each pixel in the partition P_i^n is generated, in the case of l_i^n = m_i^n = 4. Although the (2 x l_i^n + 1) already encoded pixels in the upper partition adjacent to the partition P_i^n and the (2 x m_i^n) already encoded pixels in the left partition adjacent to the partition P_i^n are defined as the pixels used for prediction in the example of Fig. 8, a larger or smaller number of pixels than the pixels shown in Fig. 8 can be used for prediction. Furthermore, although one row or column of pixels adjacent to the partition is used for prediction in the example shown in Fig. 8, two or more rows or columns of pixels adjacent to the partition can alternatively be used for prediction.

When the index value indicating the intra prediction mode for the partition P_i^n is 2 (average prediction), the intra prediction part generates an intermediate prediction image using the average of the adjacent pixels in the upper partition and the adjacent pixels in the left partition as the predicted value of every pixel in the partition P_i^n. When the index value indicating the intra prediction mode is other than 2 (average prediction), the intra prediction part generates the predicted value of each pixel in the partition P_i^n based on the prediction direction vector υ_p = (dx, dy) shown by the index value. In this case, the relative coordinate of the pixel for which the predicted value is to be generated (the target pixel for prediction) in the partition P_i^n is expressed as (x, y), with the pixel at the upper left corner of the partition as the origin. Each reference pixel that is used for prediction is located at a point of intersection of L shown below and an adjacent pixel:
L = (x, y) + k·υ_p
where k is a negative scalar value.

When a reference pixel is located at an integer pixel position, the value of that integer pixel is defined as the predicted value of the target pixel for prediction. In contrast, when a reference pixel is not located at an integer pixel position, an interpolation pixel generated from the integer pixels adjacent to the reference pixel is defined as the predicted value of the target pixel for prediction. In the example shown in Fig. 8, because the reference pixel is not located at an integer pixel position, the predicted value is interpolated from the values of the two pixels adjacent to the reference pixel. However, the interpolation of the predicted value is not limited to one based on the values of two adjacent pixels; an interpolation pixel can be generated from two or more adjacent pixels and the value of this interpolation pixel can be set as the predicted value.

The intra prediction part then performs the filtering process described below on the intermediate prediction image consisting of the predicted values in the partition P_i^n generated according to the procedure mentioned above, to acquire a final intra prediction image P_i^n, and outputs the intra prediction image P_i^n to the subtraction part 6 and to the addition part 9. The intra prediction part also outputs the intra prediction parameter used to generate the intra prediction image P_i^n to the variable length coding part 13 so that it is multiplexed into the bit stream.

The filtering process will now be explained concretely. The intra prediction part selects a filter to be used from one or more filters prepared in advance, using a method that will be described below, and performs a filtering process on each pixel of the intermediate prediction image according to the following equation (1):

s hat(p_0) = a_0·s(p_0) + a_1·s(p_1) + ... + a_(N-1)·s(p_(N-1)) + a_N    (1)

In equation (1), a_n (n = 0, 1, ..., N) are filter coefficients consisting of the coefficients (a_0, a_1, ..., a_(N-1)) associated with the reference pixels and an offset coefficient a_N. p_n (n = 0, 1, ..., N-1) denotes the reference pixels of the filter, including the target pixel p_0 to be filtered. N is an arbitrary number of reference pixels. s(p_n) denotes the luminance value of each reference pixel, and s hat(p_0) denotes the luminance value of the target pixel p_0 after the filtering process. The filter coefficients can also be formed so as not to include the offset coefficient a_N. Moreover, the luminance value of each pixel of the intermediate prediction image can be used as the luminance value s(p_n) of each reference pixel located within the partition P_i^n. As an alternative, the filtered luminance value can be used as the luminance value s(p_n), but only at the position of each pixel on which the filtering process has already been performed. For each reference pixel located outside the partition P_i^n, the encoded luminance value (the luminance value to be decoded) is set as the luminance value s(p_n) when the pixel is in an already encoded region, whereas, when the pixel is in a region not yet encoded, a signal value to be used in place of the luminance value s(p_n) is selected, according to a predetermined procedure (for example, the signal value of the pixel at the closest position among the candidate pixels is selected), from the luminance values s(p_n) of the reference pixels located within the partition P_i^n defined as mentioned above and the encoded luminance values in the already encoded region.
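The per-pixel filtering of equation (1) can be illustrated with the following sketch. It is a simplification: the reference pixels p_1..p_4 are assumed here to be the four nearest neighbours of the target pixel inside the intermediate prediction image, with edge pixels clamped, whereas the patent's actual reference-pixel arrangements are those of Figs. 9 and 12 and may also use already encoded pixels outside the partition.

```python
import numpy as np

def filter_intermediate_prediction(pred, coeffs, offset=0):
    """Sketch of equation (1):
    s_hat(p0) = a0*s(p0) + a1*s(p1) + ... + a_(N-1)*s(p_(N-1)) + aN
    with N = 5 reference pixels: the target pixel and its four neighbours."""
    padded = np.pad(pred.astype(np.float64), 1, mode="edge")
    p0 = padded[1:-1, 1:-1]
    p1, p2 = padded[:-2, 1:-1], padded[2:, 1:-1]   # above, below
    p3, p4 = padded[1:-1, :-2], padded[1:-1, 2:]   # left, right
    a0, a1, a2, a3, a4 = coeffs
    out = a0 * p0 + a1 * p1 + a2 * p2 + a3 * p3 + a4 * p4 + offset
    return np.rint(out).astype(pred.dtype)

# Example: a weak smoothing filter whose weights sum to 1 (no offset coefficient)
intermediate = np.full((4, 4), 128, dtype=np.int32)
final = filter_intermediate_prediction(intermediate, coeffs=(0.5, 0.125, 0.125, 0.125, 0.125))
```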
Fig. 9 is an explanatory drawing showing an example of the arrangement of the reference pixels in the case of N = 5. The above-mentioned filtering process is introduced in view of the following tendencies. With an increase in the size (l_i^n x m_i^n) of the partition P_i^n, a non-straight edge or the like exists in the input image more easily, and hence a displacement from the prediction direction of the intermediate prediction image occurs more easily; it is therefore preferable to smooth the intermediate prediction image. In addition, the larger the quantized value of a prediction error, the larger the quantization distortion occurring in the decoded image, and hence the lower the prediction accuracy of the intermediate prediction image generated from the already-encoded pixels adjacent to the partition P_i^n; it is therefore preferable to prepare a smoothed prediction image that roughly expresses the partition P_i^n. Furthermore, even for a pixel in the same partition P_i^n, a displacement such as an edge occurs between the intermediate prediction image and the input image more easily with increasing distance from the already-encoded pixels adjacent to the partition P_i^n that are used for generating the intermediate prediction image; it is therefore preferable to smooth the prediction image so as to suppress the rapid increase of the prediction error caused when such a displacement occurs. In addition, the intra prediction at the time of generating the intermediate prediction image is configured to use two distinct methods: the average prediction method of making all the predicted values in a prediction block equal to one another, and the prediction method using a prediction direction vector v_p. Moreover, also in the case of the prediction using a prediction direction vector v_p, because there exist both pixels for which the value of a reference pixel at an integer pixel position is set as the predicted value just as it is and pixels for which a value generated by interpolating at least two reference pixels is set as the predicted value, the locations within the prediction block of the pixels having interpolated predicted values differ according to the direction of the prediction direction vector v_p. Therefore, because the prediction image has different properties according to the intra-prediction mode, and the optimal filtering process also changes according to the intra-prediction mode, it is preferable to also change the intensity of the filter, the number of reference pixels to be referenced by the filter, the arrangement of the reference pixels, and so on, according to the index value showing the intra-prediction mode. Therefore, the filter selection process is configured so as to select a filter in consideration of the following four parameters (1) to (4):
(1) the size of the partition P_i^n (l_i^n x m_i^n);
(2) the quantization parameter included in the prediction error coding parameters;
(3) the distance between the group of already-encoded pixels ("pixels that are used for prediction" shown in Fig. 8) that are used when generating the intermediate prediction image and the target pixel to be filtered;
(4) the index value indicating the intra-prediction mode at the time of generating the intermediate prediction image.
More specifically, the filter selection process is configured so as to use a filter having a higher smoothing intensity or a filter having a larger number of reference pixels with an increase in the size (l_i^n x m_i^n) of the partition P_i^n, with an increase in the quantized value determined by the quantization parameter, and with an increase in the distance between the target pixel to be filtered and the group of already-encoded pixels located on the left and upper sides of the partition P_i^n. An example of the distance between the target pixel to be filtered and the group of already-encoded pixels located on the left and upper sides of the partition P_i^n is listed in Fig. 11. Furthermore, the filter selection process is configured so as to also change the intensity of the filter, the number of reference pixels to be referenced by the filter, the arrangement of the reference pixels, and so on, according to the index value showing the intra-prediction mode. More specifically, the adaptive selection of a filter according to the above-mentioned parameters is implemented by selecting an appropriate filter from a group of filters that are prepared in advance in correspondence with each of the combinations of the above-mentioned parameters. Furthermore, for example, when parameters (3) and (4) are combined, the definition of the "distance between the target pixel to be filtered and the group of already-encoded pixels" of parameter (3) can be changed adaptively according to the "intra-prediction mode" of parameter (4). More specifically, the definition of the distance between the target pixel to be filtered and the group of already-encoded pixels is not limited to the fixed one shown in Fig. 11, and can be a distance depending on the prediction direction, such as the distance from the "reference pixel" shown in Fig. 8. In this way, the intra-prediction part can implement an adaptive filtering process that also takes into account a relationship between a plurality of parameters, such as parameters (3) and (4). In addition, a combination for performing no filtering process can be prepared as one of the combinations of these parameters while being brought into correspondence with "no filtering process"; as a definition of the filter intensity, the weakest filter can also be defined as "no filtering process". Furthermore, because the four parameters (1) to (4) are known to the moving image decoding device, no additional information to be encoded for carrying out the above-mentioned filtering process is generated. As explained above, by preparing a necessary number of filters in advance and adaptively selecting one of them, the intra-prediction part switches between the filters. Alternatively, by defining a function of the above-mentioned filter selection parameters for each filter, such that a filter is computed according to the values of the above-mentioned filter selection parameters, the intra-prediction part can implement the switching between the filters. Although the example of selecting a filter in consideration of the four parameters (1) to (4) is shown in the above explanation, a filter can alternatively be selected in consideration of at least one of the four parameters (1) to (4).
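One way to realize the adaptive selection described above is a small function, or an equivalent lookup table, keyed by the combination of parameters (1) to (4). The sketch below only illustrates the stated tendencies with made-up thresholds and an assumed mode numbering (index 2 for the average prediction as in the text, indexes 0 and 1 assumed for the vertical and horizontal predictions); the actual correspondences are the ones tabulated in figures such as Fig. 13.

```python
def select_filter_index(partition_size, quant_param, distance, intra_mode):
    """Heuristic illustration of parameters (1)-(4) driving the filter selection.

    Returns 0 for "no filtering process" and larger indexes for stronger
    smoothing / more reference pixels.  All thresholds are illustrative
    placeholders, not values taken from the patent tables.
    """
    if partition_size >= 32:          # very large partitions: no filtering (cf. Fig. 13)
        return 0
    if intra_mode in (0, 1):          # (4) vertical/horizontal prediction assumed here:
        return 0                      #     clean straight edges, so no smoothing
    strength = 1
    if partition_size >= 16:          # (1) larger partition -> stronger smoothing
        strength += 1
    if quant_param >= 32:             # (2) coarser quantization -> stronger smoothing
        strength += 1
    if distance >= partition_size // 2:
        strength += 1                 # (3) far from the encoded pixels -> stronger smoothing
    return min(strength, 4)
```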
Hereafter, an example of the configuration of the filtering process of selecting an appropriate filter from a group of filters prepared in advance in correspondence with each of the combinations of the parameters will be shown, taking the case of using parameters (1) and (4) as an example. The filters used for the filtering process in the above-mentioned example are defined as follows:
Filter of filter index 1 (the number of reference pixels N = 3): a_0 = 3/4, a_1 = 1/8, a_2 = 1/8
Filter of filter index 2 (the number of reference pixels N = 3): a_0 = 1/2, a_1 = 1/4, a_2 = 1/4
Filter of filter index 3 (the number of reference pixels N = 3): a_0 = 1/4, a_1 = 3/8, a_2 = 3/8
Filter of filter index 4 (the number of reference pixels N = 5): a_0 = 1/4, a_1 = 3/16, a_2 = 3/16, a_3 = 3/16, a_4 = 3/16
In this case, it is assumed that the filtering process is based on equation (1) from which the offset coefficient a_N is removed (a_N = 0), and that each of these filters has an arrangement of reference pixels to be referenced thereby as shown in Fig. 12. Fig. 13 is an explanatory drawing showing an example of a table showing the filters that are used in each intra-prediction mode for each size of the partition P_i^n. In this example, it is assumed that the partition P_i^n has one of the possible sizes of 4x4 pixels, 8x8 pixels, 16x16 pixels, 32x32 pixels, and 64x64 pixels, and that there is a correspondence, as shown in Fig. 7, between the index values each showing an intra-prediction mode and the intra-prediction directions. In addition, the filter index of 0 shows that no filtering process is performed. In general, because there are tendencies as shown below when a directional prediction or an average prediction is used, by determining which filter should be used in correspondence with each combination of parameters (1) and (4) in the table, in consideration of the characteristics of the intra-prediction image, as in the table shown in Fig. 13, the intra-prediction part can implement the selection of an appropriate filter by referring to the table. Because a horizontal or vertical edge existing on an object such as a building is, in general, shaped linearly and clearly in many cases, a high-accuracy prediction can be carried out by using a vertical or horizontal prediction in many cases; it is therefore preferable not to perform any smoothing process when a horizontal or vertical prediction is carried out. Because an image signal generally has high spatial continuity, it is preferable, when an average prediction that impairs the continuity between the partition P_i^n and the already-encoded pixels adjacent to the partition P_i^n is used, to perform a smoothing process on the pixels located in the vicinity of the block boundaries on the left and upper sides of the partition P_i^n, thereby improving the continuity. Because, in a region having a diagonal prediction direction, an edge or the like is distorted and has a non-linear shape in many cases with an increase in the area of the region, it is preferable, when a diagonal prediction is used, to apply a filter having a higher smoothing intensity and a larger number of reference pixels with an increase in the partition size. In general, when the partition size becomes too large, the spatial change of the signal value within the partition becomes diversified, so that the use of a directional prediction or an average prediction results in a very rough prediction, and hence a region in which it is difficult to carry out a high-accuracy prediction increases.
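The four prepared filters above, together with a table in the spirit of Fig. 13, map directly onto small data structures. In the sketch below the coefficient lists are the ones given in the text (with the offset coefficient a_N removed), while the table entries are placeholders only, because the full contents of Fig. 13 are not reproduced here.

```python
# Coefficients (a_0, ..., a_{N-1}) of the prepared filters; a_N = 0 in this example.
FILTERS = {
    1: [3/4, 1/8, 1/8],                    # the number of reference pixels N = 3
    2: [1/2, 1/4, 1/4],                    # N = 3
    3: [1/4, 3/8, 3/8],                    # N = 3
    4: [1/4, 3/16, 3/16, 3/16, 3/16],      # N = 5
}

# Placeholder for a Fig. 13-style table: (partition size, intra-prediction mode
# index) -> filter index, where 0 means "no filtering process".  The entries
# below are illustrative only.
FILTER_TABLE = {
    (4, 2): 1,     # 4x4 partition, average prediction
    (8, 2): 2,
    (16, 2): 3,
    (32, 2): 0,    # 32x32 and larger: no filtering process
}

def lookup_filter(partition_size, intra_mode):
    """Return the filter index and its coefficients (None when no filtering)."""
    index = FILTER_TABLE.get((partition_size, intra_mode), 0)
    return index, FILTERS.get(index)
```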
Because no improvement in prediction efficiency can be expected from simply performing a smoothing process that blurs the image in such a region, it is preferable not to perform any filtering process in the case of such a large partition size, so as not to increase the computational complexity unnecessarily (for example, the table shown in Fig. 13 contains a setting for performing no filtering process in the case of a partition size of 32x32 pixels or more). In addition, in a case in which the luminance value of the intermediate prediction image is used as the luminance value s(p_n) of each reference pixel that is a pixel in the partition P_i^n at the time the filtering process is carried out, there are cases in which the filtering process can be simplified. For example, when the intra-prediction mode is an average prediction, the filtering process on the partition P_i^n can be simplified to the following filtering process for each region shown in Fig. 14.
Region A (the pixel at the upper left corner of the partition P_i^n)
Filter of filter index 1 (no change): a_0 = 3/4, a_1 = 1/8, a_2 = 1/8 (the number of reference pixels N = 3)
Filter of filter index 2 (no change): a_0 = 1/2, a_1 = 1/4, a_2 = 1/4 (the number of reference pixels N = 3)
Filter of filter index 3 (no change): a_0 = 1/4, a_1 = 3/8, a_2 = 3/8 (the number of reference pixels N = 3)
Filter of filter index 4: a_0 = 5/8, a_1 = 3/16, a_2 = 3/16 (the number of reference pixels N = 3)
Region B (the pixels at the upper edge of the partition P_i^n other than region A)
Filter of filter index 1: a_0 = 7/8, a_1 = 1/8 (the number of reference pixels N = 2)
Filter of filter index 2: a_0 = 3/4, a_1 = 1/4 (the number of reference pixels N = 2)
Filter of filter index 3: a_0 = 5/8, a_1 = 3/8 (the number of reference pixels N = 2)
Filter of filter index 4: a_0 = 13/16, a_1 = 3/16 (the number of reference pixels N = 2)
Region C (the pixels at the left edge of the partition P_i^n other than region A)
Filter of filter index 1: a_0 = 7/8, a_1 = 1/8 (the number of reference pixels N = 2)
Filter of filter index 2: a_0 = 3/4, a_1 = 1/4 (the number of reference pixels N = 2)
Filter of filter index 3: a_0 = 5/8, a_1 = 3/8 (the number of reference pixels N = 2)
Filter of filter index 4: a_0 = 13/16, a_1 = 3/16 (the number of reference pixels N = 2)
Region D (the pixels in the partition P_i^n other than regions A, B, and C)
Filters of all filter indexes: no filtering process
Even if the filtering process is simplified in the above-mentioned way, the results of the filtering process are the same as those of the filtering process before the simplification; by removing the redundant parts of the actual process in this way, the filtering process can be speeded up. Although the table shown in Fig. 13 is used in the above-mentioned example, another table can be used instead. For example, when greater importance is placed on a reduction in the computational complexity caused by the filtering process than on the degree of improvement of the coding performance, the table shown in Fig. 19 can be used instead of the table shown in Fig. 13. Because the intra-prediction part performs the filtering process only on the average prediction of a partition P_i^n whose size is 4x4 pixels, 8x8 pixels, or 16x16 pixels in the case of using this table, the number of prediction modes in each of which the filtering process is carried out is smaller than that in the case of using the table shown in Fig. 13, and therefore the increase in the computational complexity caused by the filtering process can be reduced.
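For the average-prediction case, the simplification above amounts to classifying each pixel of the partition into the regions A to D of Fig. 14 and folding the coefficients of reference pixels that lie inside the partition, whose intermediate predicted value equals the common average, into a_0. The sketch below does this for the filter of filter index 1; the assumed 3-tap layout (target pixel, upper neighbour, left neighbour) is an illustration only, since Fig. 12 is not reproduced here.

```python
def filter_average_block_index1(avg, top_refs, left_refs, size):
    """Simplified filtering (regions A-D of Fig. 14) of an average-predicted
    partition for the filter of filter index 1, assuming a 3-tap layout of
    a_0 on the target pixel, a_1 on the pixel above and a_2 on the pixel to
    the left.

    avg       : the average value filling the intermediate prediction image
    top_refs  : encoded pixels directly above the partition (top_refs[x] above column x)
    left_refs : encoded pixels directly to the left (left_refs[y] beside row y)
    """
    out = [[avg] * size for _ in range(size)]                        # region D: no filtering
    out[0][0] = 3/4 * avg + 1/8 * top_refs[0] + 1/8 * left_refs[0]   # region A: unchanged filter
    for x in range(1, size):                                         # region B: 2-tap form
        out[0][x] = 7/8 * avg + 1/8 * top_refs[x]
    for y in range(1, size):                                         # region C: 2-tap form
        out[y][0] = 7/8 * avg + 1/8 * left_refs[y]
    return out
```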
At this point, using a simplification of the filtering process in the case where the intraprevention mode mentioned above is an average forecast, the filtering process can be implemented with very low computational complexity. In addition, when importance is placed on ease of implementation, the intraprevision unit can perform the filtering process only in the medium forecast, as in the case of performing the filtering process mentioned above, and can use the same filter (eg, the filter index filter 2) all the time without having to change the filter to be used according to the size of partition P; ". In that case, while the degree of improvement in coding performance using the filter is reduced by a degree corresponding to the elimination of the process according to the size of partition P; ", the scale of the circuit of an intra-forecast unit installed in the device (the number of lines in the code in case of implementing the intra-forecast unit via software) can be reduced. This filtering process is simplified for a filter that takes into account: only parameter (4) among the four parameters (1) to (4). The filtering process does not have to be implemented in a way in which a filter having a corresponding filter index is selected by reference to the table, and can alternatively be implemented in a way in which the filter is installed directly in the intra-forecast part. For example, the filtering process is implemented in a way in which a filtering process to be carried out for each of the possible sizes of partition P; "is incorporated directly into the" preview, "or a filtering process to be carried out for each pixel position in each of the possible sizes of the P partition; " it is incorporated directly into the intraprevision part. While the forecast image that is acquired as the result of carrying out the filtering process without referring to the table in this way is equivalent to that acquired as the result of carrying out the filtering process referring to the table, the form of implementation is not a problem. In addition, although the method of using only one table to switch between filters is explained in the example mentioned above, two or more tables as mentioned above can be prepared, and the moving image encoding device can be constructed in such a way. to encode a filter selection table index 100 as header information in such form as shown in or from Figs. 15 to 18, and switch between the filter selection table for each predetermined unit. For example, by adding the filter selection table index 100 to the sequence level header, as shown in Fig. 15, the moving image encoding device can perform a filtering process more suited to the characteristics of the sequence when compared in case you only use a single table. Even in a case in which the intraprevision part 4 is constructed in such a way as to configure adjacent encoded pixels : to partition P; "in which the intra-forecasting part performed the process of: smoothing as the reference pixels when generating an intermediate forecast image of partition P;", as in a case in which a - smoothing process is performed on the reference image at the time of an intraprevision in an 8x8 pixel block in MPEG-4 AVC / H.264 explained previously, the intraprevision part 4 can perform the filtering process in an intermediate forecast image similar to that shown in the example mentioned above. 
On the other hand, because there is a - overlap between the effect of the smoothing process on the reference pixels when generating an intermediate forecast image and those of the filtering process on the intermediate forecast image, there is a case in which even if both processes are used simultaneously, only a very small performance improvement is produced when compared to a case in which one of the processes is performed. Therefore, in a case where importance is placed on reducing computational complexity, the intraprevision part can be constructed in such a way as not to perform the filtering process on the intermediate forecast image of partition P; "for which the part of intraprevision performed the - smoothing process in the reference pixels when generating the intermediate forecast image. For example, there may be a case in which when the filtering process in the intermediate forecast image is performed, the intra-forecasting part performs the filtering process only over an average forecast, as shown in the table in Fig. 19, while - when performing the smoothing process on the reference pixels at the moment when the generation of the intermediate forecast image is performed, the intra-forecasting part performs the smoothing process referring to the table, as shown in Fig. 20, showing that only specific forecasts directional are subjected to the smoothing process. In Fig. 20, 'l' shows that the smoothing process is performed and '0' shows that the smoothing process ã is not performed. The intraprevision part issues the intraprevision parameter used to generate the intraprevision image Pi for the variable length coding part 13 in order to multiplex them into a bit stream. The intraprevision part also performs an intraprevision process based on the intraprevision parameter (intraprevision mode) on each of the color difference signals of partition P; "according to the same procedure as that according to any part of —Intraprevision performs the intraprevision process on the luminance signal, and issues the intraprevision parameters used to generate the intraprevision image for the variable length coding part 13. The intraprevision part can be constructed in such a way as to carry out the process filtering mentioned above for the intraprevision of each of the color difference signals in the same way that the intraprevision part does for the luminance signal, or not to perform the filtering process mentioned above for the intraprevision of each of the luminance signals color difference.Then, the processing performed by the moving image decoding device shown in Fig. 2 will be explained. When it receives the bit stream emitted to it from the image encoding device of Fig. 1, the variable length decoding part 51 performs a variable length decoding process in the bit stream to decode information having a frame size in units of a sequence consisting of one or more frames of figures or in units of a figure (step ST21 of Fig. 4). The variable length decoding part 51 determines a maximum size of each of the coding blocks which is a unit to be processed at a time when an intraprevision process (intraframe prediction process) or a compensated motion prediction process ( interframe forecasting process) is performed according to the same procedure as that: that the coding control part 1 shown in Fig. 
1 uses, and also determines an upper limit on the number of hierarchical layers in one | hierarchy in which each of the coding blocks having the maximum size is hierarchically divided into blocks (step ST22). For example, when the maximum size of each encoding block is determined according to the resolution of the image entered in the image encoding device, the variable length decoding part determines the maximum size of each of the encoding blocks with based on the frame size information that the variable length decoding part previously decoded. When information showing both, the maximum size of each coding block and the upper limit on the number of hierarchical layers are multiplexed in the bit stream, the variable length decoding part refers to the information that is acquired by decoding the bit stream . As the information showing the division status of each of the Bº encoding blocks having the maximum size is included in the m (B%) encoding mode of the Bº encoding block having the maximum size that is multiplexed in the bit stream, the part variable length decoding module 51 specifies each of the B "encoding blocks in which the image is divided hierarchically by decoding the bit stream to acquire the encoding mode m (B) of the B encoding block having the maximum size that is multiplexed in the bit stream (step ST23). After specifying each of the B "encoding blocks, the decoding part - by variable length 51 decodes the bit stream to acquire the m (B") encoding mode of the B "encoding block. to specify each P partition; ” belonging to coding block B "based on information about partition P;" belonging to the coding mode m (B "). After specifying each partition P;" belonging to the coding block B '", the variable length decoding part 51 decodes the encoded data to acquire the data: compressed, the coding mode, the error coding parameters of: prediction, and the intraprevision parameter / parameter interpretation for each l partition P "(step ST24). '5 More specifically, when the encoding mode m (B ") assigned to the encoding block B" is an intracoding mode, the variable length decoding part decodes the encoded data to acquire the intraprevision parameter for each partition P ; "belonging to the coding block. In contrast, when the coding mode m (B") assigned to coding block B "is an intercoding mode, the variable length decoding part decodes the encoded data to acquire the parameters of interprevision for each partition P; " belonging to the coding block. The variable length decoding part further divides each partition which is a forecast unit into one or more partitions which is a transform process unit based on the transform block size information included in the forecast error coding parameters, and decodes the encoded data for each of one or more partitions which is a transform process unit to acquire the compressed data (transform coefficients in which transform and quantization are performed) of the partition. When the coding mode m (B ") of the partition Pj;" belonging to the coding block B ", which is specified by the variable length decoding part 51, is an intra-coding mode - (stepST25), the selection switch 52 emits the intra-forecast parameters issued to it from the length decoding part variable 51 for the intra-forecast part 53. 
Conversely, when the coding mode m (B ") of partition P; ' is an intercoding mode (step ST25), the selection switch emits the interpreter parameters issued to it from the variable length decoding part 51 to the compensated motion prediction part 54.: When it receives the intraprevision parameter from the selection switch 52, the intraprevision part 53 performs an intraframe prediction process on the partition P; "of the coding block B" to generate an intraprevision image P; "using an already encoded image signal in the table based on the intraprevision parameter (step ST26), as the intraprevision part 4 shown in Fig. 1. After generating the intraprevision image P; " mentioned above, the intra-preview part 53 selects a filter from one or more filters, which are prepared in advance, according to the status of the various parameters that are known at the time of generating the intra-preview image P; "mentioned above using the same method that the one that the intraprevision part 4 shown in Fig. 1 uses, and performs a filtering process on the intraprevision image P; " using the filter and configures the intraprevision image P; "in which the intraprevision part performed the filtering process as a final intraprevision image. More specifically, the intraprevision part selects a filter using the same parameters as that which the intraprevision part 4 uses for filter selection and using the same method as the filter selection method that the intraprevision part 4 uses, and performs the filtering process in the intra-forecast image. For example, in a case in which the intra-forecast part 4 causes the case of not carrying out the filtering process in correspondence with the filter index of 0, and still causes four filters that are prepared in advance in correspondence with - filter indexes of | to 4 respectively, and performs the filtering process referring to the table shown in Fig. 13, the intraprevision part 53 is constructed in such a way as to also define the same filters and filter indices as those for use in the intraprevision part 4, and perform a filter selection according to the size of the partition P; "and the index showing an intraprevision mode that is an intraprevision parameter referring to the table shown in Fig. 13 and perform the filtering process.: In addition, in a in which case a table to define a filter that is used for each combination of parameters is prepared, and the intraprevision part: 5 implements switching between filters referring to the table, as shown in the example mentioned above, the intraprevision part is constructed in such a way In order to decode the filter selection table index 100 as header information in a form as shown in or from Figs. 15 to 18, select the table shown by the table index —decod filter selection result 100 from the same group of tables as the one that the moving image encoding device uses, the table group being prepared in advance, and selecting a filter referring to the table. When receiving the interpretation parameters from the —selection switch 52, the compensated motion prediction part 54 performs a compensated motion prediction process over partition P; "of coding block B" to generate an interpretation image P ; "using one or more frames of reference images stored in a compensated motion prediction frame memory 59 based on - interpretation parameters (step ST27). 
The reverse transform / reverse quantization part 55 does reverse quantization of the compressed data associated with the coding block, which are output to it from the variable length decoding part 51, using the quantization parameter included in the - coding parameters of forecast error emitted from the variable length decoding part 51, and performs an inverse transform process (eg, an inverse DCT (inverse discrete cosine transform) or an inverse transform process such as a Inverse KI) in the compressed data to which inverse quantization was thus applied in units of a block having the transform block size included in the error coding parameters of: prediction, and output the compressed data in which the transformed part | inverse / inverse quantization performed the inverse transform process for '5 the addition part 56 as a decoded prediction error signal (signal showing the pre-compressed difference image) (step ST28). When it receives the decoded forecast error signal from the reverse transform / reverse quantization part 55, the addition part 56 generates a decoded image by adding the decoded forecast error signal and the forecast signal showing the forecast image generated by the intraprevision part 53 or compensated motion prediction part 54 and stores a decoded image signal showing the decoded image in memory 57 for intraprevision, and also outputs the decoded image signal to loop filter part 58 (step ST29) . The moving image decoding device repeatedly performs the steps from steps ST23 to ST29 until the moving image decoding device completes processing on all B "encoding blocks into which the image is hierarchically divided (step ST30). When receives the image signal — decoded from the addition part 56, the loop filter part 58 compensates for an encoding distortion included in the decoded image signal, and stores the decoded image shown by the decoded image signal in which the part loop filter compensates for coding distortion in the motion prediction frame memory - offset 59 as a reference image (step ST31). Loop filter part 58 can perform the filtering process for each coding block having the maximum size of the local decoded image signal emitted to it from addition part 56 or each coding block. Alternatively, after the local decoded image signal corresponding to all macro blocks on a screen is emitted, the loop filter part can perform the filtering process at all. macros blocks one screen at a time. As can be seen from the description above, because the intraprevision part 4 of the moving image coding device according to this modality 1 is constructed in such a way that, when performing an intraframe prediction process to generate a intrapredictive image using an image signal already encoded in a frame, select a filter from one or more filters that are prepared in advance according to the states of various parameters associated with the encoding of a target block to be filtered, and perform a filtering process over a forecast image using the filter, an advantage is provided of being able to reduce forecast errors that occur locally, and thereby being able to improve image quality. 
In addition, because the intraprevision part 4 according to this modality 1 is constructed in such a way as to select a filter taking into account at least one of the following parameters: (1) the size of partition P; ” (l "º x mj;"); (2) the quantization parameter included in the forecast error coding parameters; (3) the distance between the group of pixels already encoded that are used when generating the intermediate forecast image, and the target pixel to be filtered; and (4) the index value indicating the intra-forecast mode when generating the intermediate forecast image, provides an advantage of preventing a local forecast error from occurring when, for example, an edge of the image to be - coded becomes slightly distorted in a non-linear shape or a slight displacement occurs at the angle of an edge in the image to be encoded when performing a directional forecast, and preventing a forecast error from occurring in a margin between blocks due to a loss of continuity with the signal of an adjacent pixel already encoded for the partition when s1 makes an average forecast, and thereby being able to improve the efficiency: of forecast. : As the intraprevision part 53 of the moving image decoding device according to this mode 1 is - constructed in such a way that, when performing an intraframe prediction process to generate an intraprevision image using an image signal already encoded in a frame, select a filter from one or more filters that are prepared in advance according to the states of various parameters associated with the decoding of a target block to be filtered, and perform a filtering process on an image of prediction using the filter, an advantage is provided to reduce forecast errors that occur locally while making it possible for the moving image decoding device to also generate the same intrapredictive image as that generated by the moving image encoding device. In addition, because the intra-forecast part 53 according to this mode 1 is constructed in such a way as to select a filter taking into account at least one of the following parameters: (1) the size of the partition P; "(1º xm;”) ; (2) the quantization parameter included in - prediction error coding parameters; (3) the distance between the group of pixels already encoded that are used when generating the intermediate forecast image, and the target pixel to be filtered, and (4) the index value indicating the intra-forecast mode at the time of generating the intermediate forecast image, provides an advantage of preventing a local forecast error from occurring when, for example, an edge of the image to be encoded becomes slightly distorted in a non-linear shape or a slight shift occurs at the angle of an edge in the image to be encoded when making a directional forecast, and preventing a forecast error from occurring in a margin between blocks due to a loss of continuity age with a pixel signal already encoded for the partition when performing an average & prediction, and another advantage of making it possible for the motion picture decoding device to also generate the same intrapredictive image as that generated by the motion coding device moving image. Mode 2. 
Although the example in which the intra-prediction part 4 selects a filter, according to the states of various parameters associated with the coding of a target block to be filtered, from one or more filters that are prepared in advance, and performs a filtering process on a prediction image using the filter when performing an intra-frame prediction process to generate an intra-prediction image using an already-encoded image signal in a frame, is shown in Embodiment 1 mentioned above, a Wiener filter that minimizes the sum of squared errors between a coding block and a prediction image can alternatively be designed, and, when the use of this Wiener filter increases the degree of reduction of prediction errors as compared with the use of the filter that is selected from the one or more filters prepared in advance, the filtering process can be performed on the prediction image using the above-mentioned Wiener filter instead of the selected filter. Hereafter, the processes will be explained concretely. Each of the intra-prediction parts 4 and 53 according to Embodiment 1 mentioned above is constructed in such a way as to select a filter from one or more filters that are prepared in advance according to the states of various parameters associated with the coding of a target block to be filtered. While each of the intra-prediction parts can select an appropriate filter from the one or more selection candidates in consideration of the four parameters (1) to (4), each of the intra-prediction parts cannot perform "optimal filtering" when an optimal filter other than the one or more selection candidates exists. This Embodiment 2 is characterized in that, while the moving image encoding device designs an optimal filter on a per-picture basis, performs a filtering process, and also encodes the filter coefficients of the filter and so on, the moving image decoding device decodes the filter coefficients and so on, and performs a filtering process using the filter. The intra-prediction part 4 of the moving image encoding device performs an intra-frame prediction process on each partition P_i^n of each coding block B^n to generate an intra-prediction image P_i^n, as in Embodiment 1 mentioned above. The intra-prediction part 4 also selects a filter from one or more filters that are prepared in advance according to the states of various parameters associated with the coding of a target block to be filtered, by using the same method as that which the intra-prediction part according to Embodiment 1 mentioned above uses, and performs a filtering process on the intra-prediction image P_i^n using this filter. After determining the intra-prediction parameters for all the coding blocks B^n in the picture, for each area in which an identical filter is used within the picture (each area having the same filter index), the intra-prediction part 4 designs a Wiener filter that minimizes the sum of squared errors between the input image in the area and the intra-prediction image (the mean squared error in the target area). The filter coefficients w of the Wiener filter can be determined from the autocorrelation matrix R_{s's'} of an intermediate prediction image signal s' and the cross-correlation matrix R_{ss'} between the input image signal s and the intermediate prediction image signal s', according to the following equation (4). The sizes of the matrices R_{s's'} and R_{ss'} correspond to the number of filter taps determined.
w = R_{s's'}^(-1) · R_{ss'}    (4)
After designing the Wiener filter, the intra-prediction part 4 expresses the sum of squared errors in the target area for filter design in the case of carrying out the filtering process using the Wiener filter as D1, the code amount at the time of encoding the information (e.g., the filter coefficients) associated with the Wiener filter as R1, and the sum of squared errors in the target area for filter design in the case of carrying out the filtering process using a filter that is selected by the same method as that shown in Embodiment 1 mentioned above as D2, and then checks whether or not the following equation (5) holds:
D1 + λ·R1 < D2    (5)
where λ is a constant. When equation (5) holds, the intra-prediction part 4 performs the filtering process using the Wiener filter instead of a filter that is selected by the same method as that shown in Embodiment 1 mentioned above. In contrast, when equation (5) does not hold, the intra-prediction part performs the filtering process using a filter that the intra-prediction part selects by the same method as that shown in Embodiment 1 mentioned above. Although the intra-prediction part performs the evaluation using the sums of squared errors D1 and D2, this embodiment is not limited to this example; the intra-prediction part can alternatively perform the evaluation using measures showing other prediction distortion values, such as sums of absolute values of errors, instead of the sums of squared errors D1 and D2. When carrying out the filtering process using the Wiener filter, the intra-prediction part 4 requires filter update information showing the filter coefficients of the Wiener filter and the indexes each indicating a corresponding filter that is replaced by the Wiener filter. More specifically, when the number of filters selectable in the filtering process using the filter selection parameters is expressed as L, and indexes ranging from zero to L-1 are assigned to the filters, respectively, then, for each index, a value of "1" needs to be encoded as the filter update information when the designed Wiener filter is used for the index, whereas a value of "0" needs to be encoded as the filter update information when a prepared filter is used for the index. The variable length coding part 13 variable-length-encodes the filter update information emitted thereto from the intra-prediction part 4, and multiplexes the encoded data of the filter update information into the bit stream. Although the example of designing, for each area in which an identical filter is used within a picture, a Wiener filter that minimizes the mean squared error between the input image in the area and a prediction image is shown in this embodiment, a Wiener filter that minimizes the mean squared error between the input image and a prediction image in each area for which an identical filter is used can alternatively be designed for each of other specific areas each of which is not a picture. For example, the above-mentioned design of a Wiener filter can be performed only for a certain specific picture, or only when a specific condition is satisfied (e.g., only for a picture to which a scene change detection function is added and in which a scene change is detected). The variable length decoding part 51 of the moving image decoding device variable-length-decodes the encoded data multiplexed into the bit stream to acquire the filter update information.
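Equation (4) is the normal-equation solution for the Wiener filter coefficients, and equation (5) is the cost test that decides whether those coefficients replace the prepared filter. The following rough numpy sketch is written under simplifying assumptions: the reference-pixel values used to filter every target position of the area are assumed to have been gathered into the rows of a matrix, and the constant λ and the code amount R1 are supplied by the caller.

```python
import numpy as np

def design_wiener_filter(ref_matrix, target):
    """Equation (4): w = R_{s's'}^(-1) * R_{ss'}.

    ref_matrix : (num_pixels, num_taps) array; row i holds the reference-pixel
                 values s' used to filter the i-th target position of the area.
    target     : (num_pixels,) array of input-image values s at those positions.
    """
    r_auto = ref_matrix.T @ ref_matrix        # autocorrelation matrix R_{s's'}
    r_cross = ref_matrix.T @ target           # cross-correlation matrix R_{ss'}
    return np.linalg.solve(r_auto, r_cross)   # Wiener filter coefficients w

def prefer_wiener(d1, r1, d2, lam):
    """Equation (5): use the designed Wiener filter when D1 + lambda*R1 < D2."""
    return d1 + lam * r1 < d2

# Sketch of the per-area decision on the encoder side (placeholder values):
# w  = design_wiener_filter(ref_matrix, target)
# d1 = float(np.sum((ref_matrix @ w - target) ** 2))   # SSE with the Wiener filter
# d2 = ...   # SSE with the filter selected as in Embodiment 1
# r1 = ...   # code amount needed to transmit the filter update information
# if prefer_wiener(d1, r1, d2, lam): the Wiener filter replaces the prepared one
```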
The intra-prediction part 53 performs an intra-frame prediction process on each partition P_i^n of each coding block B^n to generate an intra-prediction image P_i^n, as in Embodiment 1 mentioned above. When it receives the filter update information from the variable length decoding part 51, the intra-prediction part 53 refers to the filter update information to check whether or not there is an update to the filter indicated by each corresponding index. When determining from the result of the check that the filter for a certain area is replaced by a Wiener filter, the intra-prediction part 53 reads the filter coefficients of the Wiener filter that are included in the filter update information to specify the Wiener filter, and performs the filtering process on the intra-prediction image P_i^n using the Wiener filter. In contrast, for an area in which no filter is replaced by a Wiener filter, the intra-prediction part selects a filter by the same method as that which the intra-prediction part according to Embodiment 1 mentioned above uses, and performs the filtering process on the intra-prediction image P_i^n using that filter. As can be seen from the above description, because the moving image encoding device according to this Embodiment 2 is constructed in such a way as to design a Wiener filter that minimizes the sum of squared errors between a coding block and a prediction image, and, when the use of this Wiener filter increases the degree of reduction of prediction errors as compared with the use of a filter that is selected from one or more filters prepared in advance, to perform the filtering process on the prediction image using the Wiener filter instead of the selected filter, there is provided an advantage of being able to further reduce prediction errors occurring locally as compared with Embodiment 1 mentioned above. While the invention has been described in its preferred embodiments, it is to be understood that an arbitrary combination of two or more of the above-mentioned embodiments can be made, various changes can be made to an arbitrary component according to any of the above-mentioned embodiments, and an arbitrary component according to any of the above-mentioned embodiments can be omitted within the scope of the invention.
INDUSTRIAL APPLICABILITY
The present invention is suitable for an image encoding device that needs to encode a moving image with a high degree of efficiency, and is also suitable for a moving image decoding device that needs to decode a moving image encoded with a high degree of efficiency.
EXPLANATIONS OF REFERENCE NUMBERS
1 - coding control part (coding control unit), 2 - block dividing part (block dividing unit), 3 - selection switch (intra-prediction unit and compensated motion prediction unit), 4 - intra-prediction part (intra-prediction unit), 5 - compensated motion prediction part (compensated motion prediction unit), 6 - subtraction part (difference image generation unit), 7 - transform/quantization part (image compression unit), 8 - inverse transform/inverse quantization part, 9 - addition part, 10 - memory for intra prediction, 11 - loop filter part, 12 - compensated motion prediction frame memory, 13 - variable length coding part (variable length coding unit), 51 - variable length decoding part (variable length decoding unit), 52 - selection switch (intra-prediction unit and compensated motion prediction unit), 53 - intra-prediction part (intra-prediction unit), 54 - compensated motion prediction part (compensated motion prediction unit), 55 - inverse transform/inverse quantization part (difference image generation unit), 56 - addition part (decoded image generation unit), 57 - memory for intra prediction, 58 - loop filter part, 59 - compensated motion prediction frame memory, 100 - filter selection table index.
Claims (22) [1] CLAIMS. 1. Moving image coding device characterized by the fact that it comprises: a coding control unit for determining a maximum size of a coding block which is a unit to be processed at a time when a forecasting process is carried out , and also determine a maximum hierarchical depth at a time when a coding block having the maximum size is divided hierarchically, and to select a coding mode that determines a method of coding each coding block from one or more available coding; a block dividing unit for dividing an image entered into encoding blocks each having a predetermined size, and also dividing each of the mentioned encoding blocks hierarchically until their hierarchical number reaches the maximum hierarchical depth determined by the control unit of mentioned coding; an intra-forecasting unit for, when an intra-coding mode is selected by the coding control unit - - mentioned as an encoding mode corresponding to one of the encoding blocks in which the input image is divided by the mentioned block division unit, perform an intraframe prediction process to generate a prediction image using an image signal already encoded in a board; a difference image generating unit to generate a difference image between one of the coding blocks in which the input image is divided by the mentioned block division unit, and the forecast image generated by the mentioned intraprediction unit; an image compression unit for compressing the difference image generated by the difference image generating unit. mentioned, and to send compressed data of the mentioned difference image; and a variable length coding unit for: 5 variable length coding of the compressed data emitted from the mentioned image compression unit and the coding mode selected by the mentioned coding control unit to generate a bit stream in which data encoded from the compressed data mentioned and encoded data from the mentioned encoding mode are —multiplexed, in which when generating the forecast image, the mentioned intraprevention unit selects a predetermined filter from one or more filters that are prepared in advance, performs a filtering process on the mentioned forecast image using the mentioned filter, and emits the forecast image on which the mentioned intra-forecast unit performed the filtering process for the mentioned difference imaging unit. [2] 2. Moving image encoding device according to claim 1, characterized in that the mentioned moving image encoding device includes a compensated motion prediction unit for when an intercoding mode is selected by the moving image unit. mentioned coding control as the coding mode corresponding to one of the coding blocks in which the input image is divided by the mentioned block dividing unit, perform a compensated motion prediction process on the mentioned coding block to generate an image of prediction using a reference image, and the difference imaging unit generates a difference image between one of the coding blocks in which the input image is divided by the mentioned block division unit, and the forecast image generated by the unit from: mentioned intraprevision or by the mentioned compensated movement forecast unit. | [3] 3. 
Moving image coding device, according to claim 2, characterized by the fact that the coding control unit determines a quantization parameter and a transform block size that are used when the difference image is compressed for each of the coding blocks, and also determines intraprevision parameters or interpreter parameters that are - used when the prediction process is performed for each of the coding blocks, the image compression unit performs a transform process on the difference image generated by the difference imaging unit in the units of a block having the transform block size determined by the mentioned coding control unit and also quantizes the transformation coefficients of the mentioned difference image using the parameter of quantization determined by the mentioned coding control unit to emit the coefficients transforms before which quantization was applied by this means as the compressed data of the mentioned difference image, and, - when encoding by variable length the compressed data emitted from the mentioned image compression unit and the encoding mode selected by the unit mentioned coding control unit, the variable length coding unit codes the variable prediction parameters or the interpretation parameters which - are determined by the mentioned coding control unit, and the quantization parameter and the transform block size to generate a bit stream in which the encoded data of the mentioned compressed data, the encoded data of the mentioned encoding mode, encoded data of the mentioned intraprevention parameters or of the mentioned interpreter parameters, encoded data of the parameter. mentioned quantization, and coded data of the mentioned: transform size are multiplexed. | [4] 4. U 5 moving image encoding device - according to claim 3, characterized by the fact that the intraprevention unit selects a filter that is used for the filtering process in consideration of at least one of a size of block in which the intra-forecast unit performs the intra-forecast, the quantization parameter determined by the coding control unit, a distance between the image signal already encoded in the frame that is used when the forecast image is generated and a target pixel to be filtered, and the intra-forecast parameters determined by the mentioned coding control unit. [5] 5. Moving image coding device, according to claim 4, characterized by the fact that when the intraprevision parameters determined by the coding control unit show an average forecast, the intraprevision unit performs the image filtering process forecast generated by this. [6] 6. Moving image coding device, according to claim 4, characterized by the fact that a filter selection table showing the filter that is used for the filtering process is prepared for each combination, which is taken into account when the intraprevision unit selects a filter, from two or more of the block size, in which the intraprevision unit performs the intraprevision, the quantization parameter determined by the mentioned - coding control unit, the distance between the image signal already encoded in the frame that is used when the forecast image is generated and the target pixel to be filtered, and the intraprevention parameters determined by the mentioned coding control unit, and the intraprevision unit selects a filter that is used for the process of filtering referring to the mentioned table. : [7] 7. 
Moving image coding device, according to claim 6, characterized by the fact that when several types of filter selection tables are prepared, the coding control unit emits a filter selection table index showing a table to which the intraprevention unit refers when selecting a filter, and the variable length coding unit includes the filter selection table index issued to it from the coding control unit mentioned in a stream header bits. [8] 8. Moving image coding device, according to claim 1, characterized by the fact that the intraprevision unit designs a Wiener filter that minimizes a sum of square errors between one of the coding blocks in which the input image is divided by the mentioned block division unit, and the forecast image, and, when using the mentioned Wiener filter increases a degree of reduction in forecast errors when compared to using the filter selected from one or more filters that are prepared in advance, performs the filtering process on the forecast image using the aforementioned Wiener filter, instead of the filter that the intra-forecast unit selected, and outputs the forecast image on which the mentioned intra-forecast unit performed the filtering process for the unit difference image generation unit, and the variable length coding unit codes the filter coefficients of the Wien filter er designed by the mentioned and multiplexed intraprediction unit - data encoded from the filter coefficients mentioned in the bit stream. [9] 9. Moving image decoding device characterized by the fact that it comprises: a variable length decoding unit to decode multiplexed encoded data in a bit stream to acquire the compressed data and a way of: encoding that are associated with each of the coding blocks in which an image is hierarchically divided; an intra-predictive unit so that, when an encoding mode associated with an encoding block that is decoded by variable length by the variable length decoding unit mentioned is an intracoding mode, perform an intraframe forecasting process to generate a forecast image using an image signal already encoded in a frame; a difference image generating unit for generating a pre-compressed difference image from compressed data associated with the variable length decoding encoding block by the mentioned variable length decoding unit; and a decoded imaging unit to add the difference image generated by the aforementioned difference imaging unit and the forecast image generated by the mentioned intraprevision unit to generate a decoded image, where when it generates the forecast image, the intra-forecast unit - mentioned selects a predetermined filter from one or more filters that are prepared in advance, performs a filtering process on the mentioned forecast image using the mentioned filter, and outputs the forecast image in which the intra-forecast unit mentioned has performed the filtering process for the decoded image generation unit — mentioned. 
[10] Moving image decoding device according to claim 9, characterized in that the mentioned moving image encoding device includes a compensated motion prediction unit for, when the coding mode associated with the block encoding, which is decoded by variable length by the variable length decoding unit, is an intercoding mode, performing a motion prediction process compensated on the mentioned coding block to generate a forecast image using a reference image, and the decoded imaging unit adds the difference image generated by the difference imaging unit and the forecast image generated by the intra-forecast unit or the compensated motion forecast unit mentioned to generate a decoded image. [11] 11. Moving image decoding device according to claim 10, characterized by the fact that the variable length decoding unit decodes the multiplexed encoded data in the bit stream by variable length to acquire compressed data, a coding mode, intraprevention parameters or interpreter parameters, a quantization parameter, and a transform block size that are associated with each of the coding blocks, and the difference imaging unit does reverse quantization of the compressed data associated with the block encoding which is decoded by variable length by the aforementioned variable length decoding unit using the quantization parameter associated with the mentioned coding block and performs a reverse transform process on the compressed data to which inverse quantization was thereby applied in units of a block having the size of a block that of transform mentioned to generate a pre-compressed difference image. [12] 12. Moving image decoding device according to claim 11, characterized by the fact that the intraprevention unit selects a filter that is used for the filtering process taking into account at least one of a block size in the which intra-forecast unit performs intra-forecast, the decoded quantization parameter: by variable length by the variable length decoding unit, the distance between the image signal already encoded in the frame that is used when generating the forecast image and a pixel target to be filtered, and the '5 - intraprevention parameters decoded by variable length by the variable length decoding unit mentioned. [13] 13. Moving image decoding device according to Claim 12, characterized in that when the intraprevention parameters decoded by variable length by the variable length decoding unit show an average forecast, the intraprevision unit performs the process of filtering on the forecast image generated thereby. [14] 14. Moving image decoding device according to claim 12, characterized by the fact that a filter selection table showing the filter that is used for the filtering process is prepared for each combination, which is taken into account when the intra-forecast unit selects a filter, from two or more of the block size in which the intra-forecast unit performs the intra-forecast, of the quantization parameter decoded by variable length by the decoding unit by variable length, the distance between the image signal already encoded in the frame that is used when generating the forecast image and the target pixel to be filtered, and the intraprevention parameters decoded by variable length by the mentioned variable length decoding unit, and the intraprevention unit selects a - filter6 is used for the filtering process referring to the mentioned table. [15] 15. 
[15] 15. Moving image decoding device, according to claim 14, characterized by the fact that, when various types of filter selection tables are prepared, the intra prediction unit selects a filter that is used for the filtering process by referring to a filter selection table which is shown by a filter selection table index variable-length decoded by the mentioned variable length decoding unit.

[16] 16. Moving image decoding device, according to claim 9, characterized by the fact that, when the variable length decoding unit variable-length decodes the encoded data multiplexed in the bit stream to acquire filter coefficients of a Wiener filter, the intra prediction unit performs the filtering process on the prediction image using the mentioned Wiener filter instead of the filter that the intra prediction unit selected from the one or more filters that are prepared in advance, and outputs the prediction image on which the intra prediction unit performed the filtering process to the decoded image generation unit.

[17] 17. Moving image coding method characterized by the fact that it comprises: a coding control processing step of a coding control unit determining a maximum size of a coding block that is a unit to be processed at a time when a prediction process is carried out, also determining a maximum hierarchical depth at a time when a coding block having the maximum size is hierarchically divided, and selecting a coding mode that determines a coding method for each coding block from one or more available coding modes; a block division processing step of a block division unit dividing an inputted image into coding blocks each having a predetermined size, and also dividing each of the mentioned coding blocks hierarchically until its hierarchy number reaches the maximum hierarchical depth determined by the mentioned coding control unit; an intra prediction processing step of, when an intra coding mode is selected by the mentioned coding control unit as a coding mode corresponding to one of the coding blocks into which the input image is divided in the mentioned block division processing step, an intra prediction unit performing an intra-frame prediction process to generate a prediction image using an image signal already encoded in a frame; a difference image generation processing step of a difference image generation unit generating a difference image between the one of the coding blocks into which the input image is divided in the block division processing step and the prediction image generated in the intra prediction processing step; an image compression processing step of an image compression unit compressing the difference image generated in the difference image generation processing step; and a variable length coding processing step of a variable length coding unit variable-length coding the compressed data output in the image compression processing step and the coding mode selected in the coding control processing step to generate a bit stream in which encoded data of the mentioned compressed data and encoded data of the mentioned coding mode are multiplexed, wherein, when generating the prediction image in the intra prediction processing step, a predetermined filter is selected from one or more filters that are prepared in advance, a filtering process is performed on the mentioned prediction image using the mentioned filter, and the prediction image on which the filtering process was performed is output to the mentioned difference image generation unit.
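Claim 17 lists the coding method as a fixed sequence of processing steps. The skeleton below only makes that data flow concrete; the horizontal predictor, the smoothing filter and the zlib-based compression are trivial stand-ins chosen for the sketch and are not the coding tools defined by the patent.

```python
import numpy as np
import zlib

def smooth(prediction):
    """Assumed 1-2-1 horizontal smoothing of the prediction image
    (stand-in for the filter selected from the prepared filters)."""
    padded = np.pad(prediction, ((0, 0), (1, 1)), mode='edge').astype(np.int32)
    return ((padded[:, :-2] + 2 * padded[:, 1:-1] + padded[:, 2:] + 2) // 4).astype(np.int16)

def encode_frame(frame, block_size=8):
    """Data-flow skeleton following the step order of claim 17; real
    transform, quantization and entropy coding are replaced by stand-ins."""
    height, width = frame.shape
    parts = []
    for y in range(0, height, block_size):                   # block division step
        for x in range(0, width, block_size):
            block = frame[y:y + block_size, x:x + block_size].astype(np.int16)
            # intra prediction step: replicate the column to the left of the
            # block (128 at the frame border), then filter the prediction image.
            if x > 0:
                left = frame[y:y + block_size, x - 1:x].astype(np.int16)
            else:
                left = np.full((block.shape[0], 1), 128, dtype=np.int16)
            prediction = smooth(np.repeat(left, block.shape[1], axis=1))
            difference = block - prediction                   # difference image generation step
            compressed = zlib.compress(difference.tobytes())  # image compression step (stand-in)
            parts.append(compressed)                          # variable length coding step (stand-in)
    return b''.join(parts)

frame = np.random.default_rng(1).integers(0, 256, (16, 16), dtype=np.uint8)
print(len(encode_frame(frame)))
```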
[18] 18. Moving image encoding method, according to claim 17, characterized by the fact that, in the intra prediction processing step, a filter that is used for the filtering process is selected in consideration of at least one of a block size on which the intra prediction unit performs the intra prediction, the quantization parameter determined in the coding control processing step, a distance between the image signal already encoded in the frame that is used when generating the prediction image and a target pixel to be filtered, and the intra prediction parameters determined in the mentioned coding control processing step.

[19] 19. Moving image encoding method, according to claim 18, characterized by the fact that, in the intra prediction processing step, the filtering process is performed on the generated prediction image when the intra prediction parameters determined in the coding control processing step show an average prediction.

[20] 20. Moving image decoding method characterized by the fact that it comprises: a variable length decoding processing step of a variable length decoding unit variable-length decoding the encoded data multiplexed in a bit stream to acquire compressed data and a coding mode that are associated with each of the coding blocks into which an image is hierarchically divided; an intra prediction processing step of, when a coding mode associated with a coding block that is variable-length decoded in the mentioned variable length decoding processing step is an intra coding mode, an intra prediction unit performing an intra-frame prediction process to generate a prediction image using an image signal already encoded in a frame; a difference image generation processing step of a difference image generation unit generating a pre-compressed difference image from the compressed data associated with the coding block variable-length decoded in the variable length decoding processing step; and a decoded image generation processing step of adding the difference image generated in the difference image generation processing step and the prediction image generated in the intra prediction processing step to generate a decoded image, wherein, when the prediction image is generated in the intra prediction processing step, a predetermined filter is selected from one or more filters that are prepared in advance, a filtering process is performed on the mentioned prediction image using the mentioned filter, and the prediction image on which the filtering process was performed is output to the decoded image generation unit.

[21] 21. Moving image decoding method, according to claim 20, characterized by the fact that, in the intra prediction processing step, a filter that is used for the filtering process is selected taking into account at least one of a block size on which the intra prediction unit performs the intra prediction, the quantization parameter variable-length decoded in the variable length decoding processing step, a distance between the image signal already encoded in the frame that is used when generating the prediction image and a target pixel to be filtered, and the intra prediction parameters variable-length decoded in the mentioned variable length decoding processing step.
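Claims 20 and 11 together describe how the decoder rebuilds a block: the compressed data are inverse-quantized with the block's quantization parameter, inverse-transformed in units of the transform block size to give the pre-compressed difference image, and then added to the already filtered prediction image. The sketch below follows that flow under assumed details; the QP-to-step mapping and the plain orthonormal DCT used here are illustrative choices, not the patent's definitions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    i = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

def reconstruct_block(quantized, prediction, qp, transform_size):
    """Decoder-side flow: inverse quantization, inverse transform per
    transform block, addition of the prediction image, clipping."""
    step = 2.0 ** (qp / 6.0)                    # assumed QP-to-step-size mapping
    basis = dct_matrix(transform_size)
    difference = np.zeros(prediction.shape, dtype=np.float64)
    for y in range(0, prediction.shape[0], transform_size):
        for x in range(0, prediction.shape[1], transform_size):
            coeffs = quantized[y:y + transform_size, x:x + transform_size] * step
            difference[y:y + transform_size, x:x + transform_size] = basis.T @ coeffs @ basis
    return np.clip(np.rint(prediction + difference), 0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
prediction = rng.integers(0, 256, (8, 8)).astype(np.float64)
quantized = rng.integers(-3, 4, (8, 8)).astype(np.float64)
print(reconstruct_block(quantized, prediction, qp=28, transform_size=4))
```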
[22] 22. Moving image decoding method, according to claim 21, characterized by the fact that, in the intra prediction processing step, the filtering process is performed on the generated prediction image when the intra prediction parameters variable-length decoded in the variable length decoding processing step show an average prediction.
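Claims 13, 19 and 22 single out the average (DC) prediction as the case in which the prediction image is filtered. The example below generates a DC prediction from the adjacent reference pixels and then smooths only the pixels nearest those references; the specific weights and the helper name dc_prediction_with_filtering are assumptions made for illustration, not the filter defined by the patent.

```python
import numpy as np

def dc_prediction_with_filtering(top, left, size=4):
    """Average (DC) intra prediction followed by filtering of the prediction
    image.  The boundary smoothing (blend the DC value with the adjacent
    reference pixels along the first row and column) is an assumed example."""
    dc = int(np.rint((np.sum(top) + np.sum(left)) / (2 * size)))
    prediction = np.full((size, size), dc, dtype=np.int32)
    # Filter only the pixels nearest the already-encoded reference samples,
    # where a flat DC value tends to produce the largest prediction errors.
    prediction[0, :] = (top + 3 * prediction[0, :] + 2) // 4    # top row
    prediction[:, 0] = (left + 3 * prediction[:, 0] + 2) // 4   # left column
    prediction[0, 0] = (top[0] + left[0] + 2 * dc + 2) // 4     # corner pixel
    return prediction.astype(np.uint8)

top = np.array([120, 124, 130, 140], dtype=np.int32)
left = np.array([118, 119, 121, 125], dtype=np.int32)
print(dc_prediction_with_filtering(top, left))
```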
Legal status:
2020-07-14 | B15K | Others concerning applications: alteration of classification | Free format text: the previous classification was H04N 7/32; IPC: H04N 19/11 (2014.01), H04N 19/117 (2014.01), H04N
2020-07-14 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2020-07-21 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2021-11-03 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority:
Application number | Filing date | Patent title
JP2011-004038 | 2011-01-12 |
JP2011004038 | 2011-01-12 |
PCT/JP2012/000061 (WO2012096150A1) | 2012-01-06 | Dynamic image encoding device, dynamic image decoding device, dynamic image encoding method, and dynamic image decoding method